Dec 12 17:18:53.379179 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:18:53.379201 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025
Dec 12 17:18:53.379211 kernel: KASLR enabled
Dec 12 17:18:53.379216 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:18:53.379222 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 12 17:18:53.379228 kernel: random: crng init done
Dec 12 17:18:53.379235 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 17:18:53.379241 kernel: secureboot: Secure boot enabled
Dec 12 17:18:53.379248 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:18:53.379254 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 12 17:18:53.379260 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:18:53.379266 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379272 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379279 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379288 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379294 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379301 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379307 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379314 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379320 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:18:53.379327 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:18:53.379333 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:18:53.379341 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:18:53.379348 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 12 17:18:53.379354 kernel: Zone ranges:
Dec 12 17:18:53.379361 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:18:53.379367 kernel: DMA32 empty
Dec 12 17:18:53.379373 kernel: Normal empty
Dec 12 17:18:53.379379 kernel: Device empty
Dec 12 17:18:53.379385 kernel: Movable zone start for each node
Dec 12 17:18:53.379392 kernel: Early memory node ranges
Dec 12 17:18:53.379398 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 12 17:18:53.379405 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 12 17:18:53.379411 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 12 17:18:53.379419 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 12 17:18:53.379425 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 12 17:18:53.379431 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 12 17:18:53.379438 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 12 17:18:53.379444 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 12 17:18:53.379450 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:18:53.379460 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:18:53.379467 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:18:53.379474 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 12 17:18:53.379480 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:18:53.379487 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:18:53.379494 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:18:53.379501 kernel: psci: Trusted OS migration not required
Dec 12 17:18:53.379507 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:18:53.379516 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:18:53.379523 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:18:53.379530 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:18:53.379538 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:18:53.379545 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:18:53.379551 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:18:53.379558 kernel: CPU features: detected: Spectre-v4
Dec 12 17:18:53.379565 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:18:53.379572 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:18:53.379578 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:18:53.379585 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:18:53.379593 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:18:53.379600 kernel: alternatives: applying boot alternatives
Dec 12 17:18:53.379607 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 12 17:18:53.379614 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:18:53.379621 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:18:53.379628 kernel: Fallback order for Node 0: 0
Dec 12 17:18:53.379635 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:18:53.379641 kernel: Policy zone: DMA
Dec 12 17:18:53.379648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:18:53.379655 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:18:53.379663 kernel: software IO TLB: area num 4.
Dec 12 17:18:53.379670 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:18:53.379677 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 12 17:18:53.379684 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:18:53.379691 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:18:53.379699 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:18:53.379706 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:18:53.379713 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:18:53.379719 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:18:53.379726 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:18:53.379733 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 12 17:18:53.379740 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:18:53.379749 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:18:53.379755 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:18:53.379762 kernel: GICv3: 256 SPIs implemented Dec 12 17:18:53.379769 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:18:53.379775 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:18:53.379791 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 12 17:18:53.379799 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:18:53.379806 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 12 17:18:53.379813 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 12 17:18:53.379820 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:18:53.379827 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:18:53.379836 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 12 17:18:53.379843 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 12 17:18:53.379850 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:18:53.379857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:18:53.379864 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 12 17:18:53.379871 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 12 17:18:53.379878 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 12 17:18:53.379885 kernel: arm-pv: using stolen time PV Dec 12 17:18:53.379892 kernel: Console: colour dummy device 80x25 Dec 12 17:18:53.379901 kernel: ACPI: Core revision 20240827 Dec 12 17:18:53.379908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 12 17:18:53.379915 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:18:53.379922 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:18:53.379929 kernel: landlock: Up and running. Dec 12 17:18:53.379936 kernel: SELinux: Initializing. Dec 12 17:18:53.379954 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:18:53.379961 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:18:53.380058 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:18:53.380066 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:18:53.380074 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:18:53.380081 kernel: Remapping and enabling EFI services. Dec 12 17:18:53.380088 kernel: smp: Bringing up secondary CPUs ... 
Dec 12 17:18:53.380095 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:18:53.380102 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 12 17:18:53.380112 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 12 17:18:53.380120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:18:53.380133 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 12 17:18:53.380141 kernel: Detected PIPT I-cache on CPU2 Dec 12 17:18:53.380149 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 12 17:18:53.380156 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 12 17:18:53.380164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:18:53.380171 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 12 17:18:53.380179 kernel: Detected PIPT I-cache on CPU3 Dec 12 17:18:53.380188 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 12 17:18:53.380195 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 12 17:18:53.380203 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:18:53.380211 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 12 17:18:53.380219 kernel: smp: Brought up 1 node, 4 CPUs Dec 12 17:18:53.380229 kernel: SMP: Total of 4 processors activated. Dec 12 17:18:53.380236 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:18:53.380245 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:18:53.380252 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:18:53.380260 kernel: CPU features: detected: Common not Private translations Dec 12 17:18:53.380268 kernel: CPU features: detected: CRC32 instructions Dec 12 17:18:53.380275 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 12 17:18:53.380285 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:18:53.380292 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:18:53.380300 kernel: CPU features: detected: Privileged Access Never Dec 12 17:18:53.380308 kernel: CPU features: detected: RAS Extension Support Dec 12 17:18:53.380316 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:18:53.380323 kernel: alternatives: applying system-wide alternatives Dec 12 17:18:53.380331 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 12 17:18:53.380342 kernel: Memory: 2448804K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 101148K reserved, 16384K cma-reserved) Dec 12 17:18:53.380350 kernel: devtmpfs: initialized Dec 12 17:18:53.380358 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:18:53.380366 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 12 17:18:53.380374 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:18:53.380382 kernel: 0 pages in range for non-PLT usage Dec 12 17:18:53.380390 kernel: 515184 pages in range for PLT usage Dec 12 17:18:53.380398 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:18:53.380407 kernel: SMBIOS 3.0.0 present. 
Dec 12 17:18:53.380415 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:18:53.380423 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:18:53.380431 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:18:53.380438 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:18:53.380446 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:18:53.380454 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:18:53.380464 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:18:53.380472 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 12 17:18:53.380479 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:18:53.380486 kernel: cpuidle: using governor menu
Dec 12 17:18:53.380494 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:18:53.380502 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:18:53.380509 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:18:53.380518 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:18:53.380526 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:18:53.380533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:18:53.380553 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:18:53.380561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:18:53.380568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:18:53.380577 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:18:53.380586 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:18:53.380594 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:18:53.380601 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:18:53.380609 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:18:53.380617 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:18:53.380624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:18:53.380632 kernel: ACPI: Interpreter enabled
Dec 12 17:18:53.380639 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:18:53.380648 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:18:53.380656 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:18:53.380663 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:18:53.380671 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:18:53.380678 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:18:53.380686 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:18:53.380694 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:18:53.380704 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:18:53.380898 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:18:53.381012 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:18:53.381194 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:18:53.381287 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:18:53.381371 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:18:53.381387 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:18:53.381395 kernel: PCI host bridge to bus 0000:00
Dec 12 17:18:53.381485 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:18:53.381559 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:18:53.381631 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:18:53.381702 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:18:53.381815 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:18:53.381912 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:18:53.382021 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:18:53.382104 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:18:53.382192 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:18:53.382285 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:18:53.382366 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:18:53.382457 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:18:53.382530 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:18:53.382603 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:18:53.382675 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:18:53.382687 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:18:53.382694 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:18:53.382702 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:18:53.382709 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:18:53.382717 kernel: iommu: Default domain type: Translated
Dec 12 17:18:53.382724 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:18:53.382732 kernel: efivars: Registered efivars operations
Dec 12 17:18:53.382741 kernel: vgaarb: loaded
Dec 12 17:18:53.382748 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:18:53.382756 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:18:53.382764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:18:53.382772 kernel: pnp: PnP ACPI init
Dec 12 17:18:53.382877 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:18:53.382891 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:18:53.382899 kernel: NET: Registered PF_INET protocol family
Dec 12 17:18:53.382907 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:18:53.382914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:18:53.382922 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:18:53.382930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:18:53.382937 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:18:53.382957 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:18:53.382964 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:18:53.382972 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:18:53.382980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:18:53.382987 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:18:53.382995 kernel: kvm [1]: HYP mode not available
Dec 12 17:18:53.383003 kernel: Initialise system trusted keyrings
Dec 12 17:18:53.383012 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:18:53.383020 kernel: Key type asymmetric registered
Dec 12 17:18:53.383027 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:18:53.383035 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:18:53.383042 kernel: io scheduler mq-deadline registered
Dec 12 17:18:53.383050 kernel: io scheduler kyber registered
Dec 12 17:18:53.383057 kernel: io scheduler bfq registered
Dec 12 17:18:53.383065 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:18:53.383074 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:18:53.383082 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:18:53.383170 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:18:53.383181 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:18:53.383188 kernel: thunder_xcv, ver 1.0
Dec 12 17:18:53.383196 kernel: thunder_bgx, ver 1.0
Dec 12 17:18:53.383203 kernel: nicpf, ver 1.0
Dec 12 17:18:53.383213 kernel: nicvf, ver 1.0
Dec 12 17:18:53.383308 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:18:53.383386 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:18:52 UTC (1765559932)
Dec 12 17:18:53.383396 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:18:53.383403 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:18:53.383411 kernel: watchdog: NMI not fully supported
Dec 12 17:18:53.383420 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:18:53.383427 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:18:53.383435 kernel: Segment Routing with IPv6
Dec 12 17:18:53.383442 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:18:53.383449 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:18:53.383457 kernel: Key type dns_resolver registered
Dec 12 17:18:53.383464 kernel: registered taskstats version 1
Dec 12 17:18:53.383473 kernel: Loading compiled-in X.509 certificates
Dec 12 17:18:53.383481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9'
Dec 12 17:18:53.383489 kernel: Demotion targets for Node 0: null
Dec 12 17:18:53.383496 kernel: Key type .fscrypt registered
Dec 12 17:18:53.383503 kernel: Key type fscrypt-provisioning registered
Dec 12 17:18:53.383511 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:18:53.383518 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:18:53.383527 kernel: ima: No architecture policies found
Dec 12 17:18:53.383535 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:18:53.383542 kernel: clk: Disabling unused clocks
Dec 12 17:18:53.383550 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:18:53.383557 kernel: Freeing unused kernel memory: 12416K
Dec 12 17:18:53.383565 kernel: Run /init as init process
Dec 12 17:18:53.383572 kernel: with arguments:
Dec 12 17:18:53.383580 kernel: /init
Dec 12 17:18:53.383588 kernel: with environment:
Dec 12 17:18:53.383596 kernel: HOME=/
Dec 12 17:18:53.383603 kernel: TERM=linux
Dec 12 17:18:53.383700 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 17:18:53.383789 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 12 17:18:53.383801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:18:53.383812 kernel: GPT:16515071 != 27000831
Dec 12 17:18:53.383820 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:18:53.383827 kernel: GPT:16515071 != 27000831
Dec 12 17:18:53.383835 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:18:53.383842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:18:53.383850 kernel: SCSI subsystem initialized
Dec 12 17:18:53.383858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:18:53.383867 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:18:53.383875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:18:53.383882 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:18:53.383890 kernel: raid6: neonx8 gen() 15420 MB/s
Dec 12 17:18:53.383898 kernel: raid6: neonx4 gen() 15578 MB/s
Dec 12 17:18:53.383905 kernel: raid6: neonx2 gen() 12871 MB/s
Dec 12 17:18:53.383913 kernel: raid6: neonx1 gen() 9799 MB/s
Dec 12 17:18:53.383922 kernel: raid6: int64x8 gen() 6522 MB/s
Dec 12 17:18:53.383930 kernel: raid6: int64x4 gen() 7349 MB/s
Dec 12 17:18:53.383937 kernel: raid6: int64x2 gen() 6084 MB/s
Dec 12 17:18:53.384042 kernel: raid6: int64x1 gen() 5011 MB/s
Dec 12 17:18:53.384051 kernel: raid6: using algorithm neonx4 gen() 15578 MB/s
Dec 12 17:18:53.384059 kernel: raid6: .... xor() 12350 MB/s, rmw enabled
Dec 12 17:18:53.384066 kernel: raid6: using neon recovery algorithm
Dec 12 17:18:53.384078 kernel: xor: measuring software checksum speed
Dec 12 17:18:53.384099 kernel: 8regs : 21596 MB/sec
Dec 12 17:18:53.384107 kernel: 32regs : 21676 MB/sec
Dec 12 17:18:53.384114 kernel: arm64_neon : 28089 MB/sec
Dec 12 17:18:53.384122 kernel: xor: using function: arm64_neon (28089 MB/sec)
Dec 12 17:18:53.384132 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:18:53.384140 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (206)
Dec 12 17:18:53.384149 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc
Dec 12 17:18:53.384157 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:18:53.384165 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:18:53.384172 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:18:53.384180 kernel: loop: module loaded
Dec 12 17:18:53.384187 kernel: loop0: detected capacity change from 0 to 91480
Dec 12 17:18:53.384196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 17:18:53.384206 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:18:53.384217 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:18:53.384225 systemd[1]: Detected virtualization kvm.
Dec 12 17:18:53.384233 systemd[1]: Detected architecture arm64.
Dec 12 17:18:53.384241 systemd[1]: Running in initrd.
Dec 12 17:18:53.384249 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:18:53.384259 systemd[1]: Hostname set to .
Dec 12 17:18:53.384267 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 12 17:18:53.384275 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:18:53.384283 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:18:53.384291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:18:53.384300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:18:53.384308 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:18:53.384318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:18:53.384327 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:18:53.384335 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:18:53.384343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:18:53.384351 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:18:53.384361 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:18:53.384369 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:18:53.384377 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:18:53.384385 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:18:53.384394 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:18:53.384402 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:18:53.384410 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:18:53.384420 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:18:53.384429 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:18:53.384437 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:18:53.384445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:18:53.384454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:18:53.384469 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:18:53.384479 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:18:53.384488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:18:53.384496 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:18:53.384505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:18:53.384513 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:18:53.384522 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:18:53.384532 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:18:53.384540 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:18:53.384549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:18:53.384557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:18:53.384567 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:18:53.384576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:18:53.384584 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:18:53.384593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:18:53.384601 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:18:53.384610 kernel: Bridge firewalling registered Dec 12 17:18:53.384619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:18:53.384628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:18:53.384662 systemd-journald[345]: Collecting audit messages is enabled. Dec 12 17:18:53.384684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:18:53.384694 systemd-journald[345]: Journal started Dec 12 17:18:53.384713 systemd-journald[345]: Runtime Journal (/run/log/journal/0771cb07a5b545e095d06438b97b5c51) is 6M, max 48.5M, 42.4M free. 
Dec 12 17:18:53.367419 systemd-modules-load[348]: Inserted module 'br_netfilter' Dec 12 17:18:53.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.391992 kernel: audit: type=1130 audit(1765559933.386:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.392045 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:18:53.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.394610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:18:53.399182 kernel: audit: type=1130 audit(1765559933.391:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.399215 kernel: audit: type=1130 audit(1765559933.395:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.399075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:18:53.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.404274 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:18:53.405574 kernel: audit: type=1130 audit(1765559933.400:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.406000 audit: BPF prog-id=6 op=LOAD Dec 12 17:18:53.407965 kernel: audit: type=1334 audit(1765559933.406:6): prog-id=6 op=LOAD Dec 12 17:18:53.407794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:18:53.409539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:18:53.421003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:18:53.432255 systemd-tmpfiles[374]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:18:53.434540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:18:53.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.441585 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 12 17:18:53.444416 kernel: audit: type=1130 audit(1765559933.436:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.444441 kernel: audit: type=1130 audit(1765559933.443:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.444317 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:18:53.453376 kernel: audit: type=1130 audit(1765559933.448:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.453225 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:18:53.470813 systemd-resolved[370]: Positive Trust Anchors: Dec 12 17:18:53.470845 systemd-resolved[370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:18:53.470849 systemd-resolved[370]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:18:53.470880 systemd-resolved[370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:18:53.483589 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:18:53.493582 systemd-resolved[370]: Defaulting to hostname 'linux'. Dec 12 17:18:53.494911 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:18:53.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.496630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:18:53.502796 kernel: audit: type=1130 audit(1765559933.496:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.574847 kernel: Loading iSCSI transport class v2.0-870. 
Dec 12 17:18:53.584984 kernel: iscsi: registered transport (tcp) Dec 12 17:18:53.600004 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:18:53.600067 kernel: QLogic iSCSI HBA Driver Dec 12 17:18:53.622681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:18:53.643843 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:18:53.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.645703 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:18:53.696177 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:18:53.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.699707 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:18:53.701687 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:18:53.741963 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:18:53.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.743000 audit: BPF prog-id=7 op=LOAD Dec 12 17:18:53.743000 audit: BPF prog-id=8 op=LOAD Dec 12 17:18:53.744805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:18:53.776800 systemd-udevd[629]: Using default interface naming scheme 'v257'. Dec 12 17:18:53.786090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:18:53.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.791076 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:18:53.809232 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:18:53.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.810000 audit: BPF prog-id=9 op=LOAD Dec 12 17:18:53.812596 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:18:53.818827 dracut-pre-trigger[706]: rd.md=0: removing MD RAID activation Dec 12 17:18:53.845049 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:18:53.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.847416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 12 17:18:53.855480 systemd-networkd[733]: lo: Link UP Dec 12 17:18:53.855488 systemd-networkd[733]: lo: Gained carrier Dec 12 17:18:53.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.856142 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:18:53.857297 systemd[1]: Reached target network.target - Network. Dec 12 17:18:53.907988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:18:53.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:53.911768 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:18:53.977837 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:18:53.992532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:18:54.000617 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:18:54.007671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:18:54.011273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:18:54.014216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:18:54.014347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:18:54.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:54.016869 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:18:54.027721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:18:54.032102 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:18:54.033257 systemd-networkd[733]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:18:54.033907 systemd-networkd[733]: eth0: Link UP Dec 12 17:18:54.034235 systemd-networkd[733]: eth0: Gained carrier Dec 12 17:18:54.037726 disk-uuid[803]: Primary Header is updated. Dec 12 17:18:54.037726 disk-uuid[803]: Secondary Entries is updated. Dec 12 17:18:54.037726 disk-uuid[803]: Secondary Header is updated. Dec 12 17:18:54.034246 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:18:54.048032 systemd-networkd[733]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:18:54.054014 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:18:54.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:54.058651 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 12 17:18:54.060979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:18:54.062933 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:18:54.067270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:18:54.071070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:18:54.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:54.099933 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:18:54.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.066395 disk-uuid[805]: Warning: The kernel is still using the old partition table. Dec 12 17:18:55.066395 disk-uuid[805]: The new table will be used at the next reboot or after you Dec 12 17:18:55.066395 disk-uuid[805]: run partprobe(8) or kpartx(8) Dec 12 17:18:55.066395 disk-uuid[805]: The operation has completed successfully. Dec 12 17:18:55.072201 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:18:55.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.072317 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:18:55.074396 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:18:55.096089 systemd-networkd[733]: eth0: Gained IPv6LL Dec 12 17:18:55.106677 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836) Dec 12 17:18:55.106721 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:18:55.106732 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:18:55.110153 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:18:55.110181 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:18:55.115957 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:18:55.118056 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:18:55.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.120130 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 12 17:18:55.218218 ignition[855]: Ignition 2.22.0 Dec 12 17:18:55.218234 ignition[855]: Stage: fetch-offline Dec 12 17:18:55.218285 ignition[855]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:18:55.218295 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:18:55.218450 ignition[855]: parsed url from cmdline: "" Dec 12 17:18:55.218453 ignition[855]: no config URL provided Dec 12 17:18:55.218457 ignition[855]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:18:55.218465 ignition[855]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:18:55.218509 ignition[855]: op(1): [started] loading QEMU firmware config module Dec 12 17:18:55.218513 ignition[855]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:18:55.223922 ignition[855]: op(1): [finished] loading QEMU firmware config module Dec 12 17:18:55.270879 ignition[855]: parsing config with SHA512: dd54b5d15f921b85a276adcc2161490b8cba1bf597ac67d1b694b71bf791fb398e2511d98760c7699131d43c0ba0fe9e5bebcae681d534adb5f895ad2d5c2888 Dec 12 17:18:55.276749 unknown[855]: fetched base config from "system" Dec 12 17:18:55.276761 unknown[855]: fetched user config from "qemu" Dec 12 17:18:55.277206 ignition[855]: fetch-offline: fetch-offline passed Dec 12 17:18:55.277263 ignition[855]: Ignition finished successfully Dec 12 17:18:55.279938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:18:55.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.281273 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:18:55.284152 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:18:55.313472 ignition[870]: Ignition 2.22.0 Dec 12 17:18:55.313493 ignition[870]: Stage: kargs Dec 12 17:18:55.313634 ignition[870]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:18:55.316741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:18:55.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.313642 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:18:55.314429 ignition[870]: kargs: kargs passed Dec 12 17:18:55.318793 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 17:18:55.314476 ignition[870]: Ignition finished successfully Dec 12 17:18:55.349228 ignition[878]: Ignition 2.22.0 Dec 12 17:18:55.349244 ignition[878]: Stage: disks Dec 12 17:18:55.349404 ignition[878]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:18:55.349414 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:18:55.350288 ignition[878]: disks: disks passed Dec 12 17:18:55.352564 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:18:55.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.350343 ignition[878]: Ignition finished successfully Dec 12 17:18:55.354373 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 12 17:18:55.355782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:18:55.357401 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:18:55.359062 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:18:55.360822 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:18:55.363479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:18:55.407513 systemd-fsck[889]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 12 17:18:55.550922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:18:55.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.553770 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:18:55.621965 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:18:55.622884 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:18:55.624659 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:18:55.629711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:18:55.632746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:18:55.634236 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:18:55.634280 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:18:55.634310 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:18:55.647745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:18:55.653653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:18:55.659918 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898) Dec 12 17:18:55.659961 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:18:55.659973 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:18:55.663083 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:18:55.663131 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:18:55.667727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:18:55.705740 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:18:55.711329 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:18:55.717045 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:18:55.721870 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:18:55.806782 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:18:55.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.809281 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Dec 12 17:18:55.811914 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:18:55.826540 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:18:55.828958 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:18:55.842079 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:18:55.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.858959 ignition[1012]: INFO : Ignition 2.22.0 Dec 12 17:18:55.858959 ignition[1012]: INFO : Stage: mount Dec 12 17:18:55.860575 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:18:55.860575 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:18:55.860575 ignition[1012]: INFO : mount: mount passed Dec 12 17:18:55.860575 ignition[1012]: INFO : Ignition finished successfully Dec 12 17:18:55.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:55.861554 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:18:55.866195 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:18:56.624431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:18:56.651026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Dec 12 17:18:56.651079 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:18:56.651092 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:18:56.655029 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:18:56.655075 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:18:56.656389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:18:56.695923 ignition[1041]: INFO : Ignition 2.22.0 Dec 12 17:18:56.695923 ignition[1041]: INFO : Stage: files Dec 12 17:18:56.697588 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:18:56.697588 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:18:56.697588 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:18:56.700724 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:18:56.700724 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:18:56.700724 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:18:56.704702 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:18:56.704702 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:18:56.704702 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:18:56.704702 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:18:56.701192 unknown[1041]: wrote ssh authorized keys file for user: core Dec 12 17:18:56.751914 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:18:56.868714 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:18:56.868714 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:18:56.872639 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:18:56.884335 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:18:56.884335 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:18:56.884335 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:18:56.884335 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:18:56.891650 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:18:56.891650 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 12 17:18:57.249671 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:18:57.570990 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:18:57.570990 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:18:57.576232 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:18:57.580221 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:18:57.580221 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:18:57.580221 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 17:18:57.584577 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:18:57.584577 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:18:57.584577 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 17:18:57.584577 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:18:57.597829 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:18:57.601840 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:18:57.604027 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:18:57.604027 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:18:57.604027 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:18:57.604027 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:18:57.604027 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:18:57.604027 ignition[1041]: INFO : files: files passed Dec 12 17:18:57.604027 ignition[1041]: INFO : Ignition finished successfully Dec 12 17:18:57.618780 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 12 17:18:57.618807 kernel: audit: type=1130 audit(1765559937.606:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:57.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.605116 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:18:57.607993 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:18:57.613092 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:18:57.626544 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:18:57.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.631072 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:18:57.634488 kernel: audit: type=1130 audit(1765559937.627:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.634516 kernel: audit: type=1131 audit(1765559937.627:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.626674 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:18:57.635829 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:18:57.635829 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:18:57.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.642704 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:18:57.644841 kernel: audit: type=1130 audit(1765559937.637:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.637052 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:18:57.639173 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:18:57.644517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:18:57.702114 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:18:57.702256 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:18:57.709918 kernel: audit: type=1130 audit(1765559937.704:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:57.709976 kernel: audit: type=1131 audit(1765559937.704:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.704458 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:18:57.711007 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:18:57.713198 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:18:57.715733 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:18:57.760048 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:18:57.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.763160 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:18:57.767961 kernel: audit: type=1130 audit(1765559937.761:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.786730 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:18:57.786881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:18:57.789477 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:18:57.793000 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:18:57.800826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:18:57.801043 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:18:57.805404 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:18:57.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.806471 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:18:57.811314 kernel: audit: type=1131 audit(1765559937.804:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.810754 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:18:57.812304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:18:57.813978 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:18:57.815701 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:18:57.817428 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
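The Ignition files stage above reports every asset it handled as paired "[started]/[finished] writing file ..." and "writing link ..." records. A minimal sketch for pulling that inventory out of this journal excerpt, assuming the excerpt has been saved to a plain-text file; the boot.log filename and the regular expression are illustrative choices, not something taken from the log itself:

import re

# Matches Ignition files-stage records such as:
#   ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3):
#   [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
WRITE_RE = re.compile(
    r'ignition\[\d+\]: INFO : files: .*?\[(started|finished)\] '
    r'writing (file|link) "([^"]+)"'
)

def ignition_writes(path="boot.log"):
    """Yield (state, kind, target) for every write the files stage logged."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            # A physical line in this dump may hold several journal records,
            # so collect every match on the line rather than only the first.
            for state, kind, target in WRITE_RE.findall(line):
                yield state, kind, target

if __name__ == "__main__":
    for state, kind, target in ignition_writes():
        print(f"{state:8} {kind:4} {target}")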
Dec 12 17:18:57.819025 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:18:57.820867 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:18:57.822805 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:18:57.824585 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:18:57.826041 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:18:57.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.826202 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:18:57.831270 kernel: audit: type=1131 audit(1765559937.827:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.830496 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:18:57.832160 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:18:57.833823 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:18:57.834718 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:18:57.836659 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:18:57.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.836815 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:18:57.842325 kernel: audit: type=1131 audit(1765559937.838:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.841504 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:18:57.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.841649 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:18:57.843494 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:18:57.844907 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:18:57.845026 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:18:57.847036 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:18:57.848428 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:18:57.850141 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:18:57.850240 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:18:57.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.851960 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Dec 12 17:18:57.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.852051 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:18:57.853507 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:18:57.853584 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:18:57.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.855210 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:18:57.855336 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:18:57.857098 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:18:57.857228 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:18:57.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.860013 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:18:57.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.862097 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:18:57.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.862807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:18:57.866419 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:18:57.872238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:18:57.872874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:18:57.874199 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:18:57.874312 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:18:57.876620 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:18:57.876742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:18:57.894922 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:18:57.897806 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:18:57.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:57.903115 ignition[1098]: INFO : Ignition 2.22.0 Dec 12 17:18:57.903115 ignition[1098]: INFO : Stage: umount Dec 12 17:18:57.903115 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:18:57.903115 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:18:57.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.902808 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:18:57.910525 ignition[1098]: INFO : umount: umount passed Dec 12 17:18:57.910525 ignition[1098]: INFO : Ignition finished successfully Dec 12 17:18:57.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.905813 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:18:57.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.905922 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:18:57.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.908382 systemd[1]: Stopped target network.target - Network. Dec 12 17:18:57.909915 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:18:57.910021 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:18:57.911640 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:18:57.911708 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:18:57.913073 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:18:57.913131 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:18:57.914669 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:18:57.914724 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:18:57.916315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:18:57.918191 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:18:57.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.927477 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:18:57.927590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:18:57.939509 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:18:57.939634 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 12 17:18:57.939000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:18:57.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.941634 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:18:57.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.941733 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:18:57.946210 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:18:57.946000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:18:57.947744 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:18:57.947815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:18:57.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.949654 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:18:57.949725 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:18:57.952379 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:18:57.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.953897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:18:57.953986 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:18:57.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.956740 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:18:57.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.956809 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:18:57.959412 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:18:57.959465 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:18:57.961567 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:18:57.993698 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:18:57.996105 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:18:57.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.997615 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:18:57.997659 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:18:57.999321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 12 17:18:58.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:57.999359 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:18:58.001184 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:18:58.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.001252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:18:58.003885 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:18:58.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.003956 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:18:58.006658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:18:58.006714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:18:58.010821 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:18:58.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.012684 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:18:58.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.012759 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:18:58.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.015053 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:18:58.015112 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:18:58.017152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:18:58.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.017201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:18:58.020277 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:18:58.022592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:18:58.040868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:18:58.041028 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:18:58.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:58.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:58.043229 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:18:58.046171 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:18:58.073359 systemd[1]: Switching root. Dec 12 17:18:58.114267 systemd-journald[345]: Journal stopped Dec 12 17:18:59.069010 systemd-journald[345]: Received SIGTERM from PID 1 (systemd). Dec 12 17:18:59.069076 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:18:59.069096 kernel: SELinux: policy capability open_perms=1 Dec 12 17:18:59.069114 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:18:59.069133 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:18:59.069144 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:18:59.069157 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:18:59.069167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:18:59.069177 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:18:59.069187 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:18:59.069199 systemd[1]: Successfully loaded SELinux policy in 75.435ms. Dec 12 17:18:59.069212 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.423ms. Dec 12 17:18:59.069224 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:18:59.069240 systemd[1]: Detected virtualization kvm. Dec 12 17:18:59.069251 systemd[1]: Detected architecture arm64. Dec 12 17:18:59.069263 systemd[1]: Detected first boot. Dec 12 17:18:59.069274 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:18:59.069285 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:18:59.069296 zram_generator::config[1143]: No configuration found. Dec 12 17:18:59.069310 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:18:59.069320 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:18:59.069331 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:18:59.069342 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:18:59.069355 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:18:59.069366 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:18:59.069376 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:18:59.069387 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:18:59.069399 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:18:59.069410 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:18:59.069420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:18:59.069432 systemd[1]: Created slice user.slice - User and Session Slice. 
Dec 12 17:18:59.069442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:18:59.069454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:18:59.069465 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:18:59.069537 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:18:59.069555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:18:59.069567 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:18:59.069581 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:18:59.069592 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:18:59.069603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:18:59.069613 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:18:59.069624 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:18:59.069635 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:18:59.069651 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:18:59.069663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:18:59.069674 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:18:59.069685 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:18:59.069696 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:18:59.069707 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:18:59.069717 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:18:59.069729 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:18:59.069740 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:18:59.069751 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:18:59.069769 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:18:59.069782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:18:59.069794 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:18:59.069805 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:18:59.069816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:18:59.069830 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:18:59.069840 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:18:59.069851 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:18:59.069862 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:18:59.069873 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:18:59.069883 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:18:59.069894 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 12 17:18:59.069906 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:18:59.069918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:18:59.069929 systemd[1]: Reached target machines.target - Containers. Dec 12 17:18:59.069955 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:18:59.069968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:18:59.069980 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:18:59.069990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:18:59.070003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:18:59.070014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:18:59.070025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:18:59.070036 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:18:59.070047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:18:59.070059 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:18:59.070072 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:18:59.070083 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:18:59.070093 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:18:59.070104 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:18:59.070118 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:18:59.070129 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:18:59.070139 kernel: ACPI: bus type drm_connector registered Dec 12 17:18:59.070150 kernel: fuse: init (API version 7.41) Dec 12 17:18:59.070160 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:18:59.070171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:18:59.070182 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:18:59.070194 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:18:59.070205 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:18:59.070216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:18:59.070253 systemd-journald[1223]: Collecting audit messages is enabled. Dec 12 17:18:59.070277 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:18:59.070290 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:18:59.070301 systemd-journald[1223]: Journal started Dec 12 17:18:59.070322 systemd-journald[1223]: Runtime Journal (/run/log/journal/0771cb07a5b545e095d06438b97b5c51) is 6M, max 48.5M, 42.4M free. Dec 12 17:18:59.072094 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Dec 12 17:18:58.906000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:18:59.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.027000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:18:59.027000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:18:59.029000 audit: BPF prog-id=15 op=LOAD Dec 12 17:18:59.030000 audit: BPF prog-id=16 op=LOAD Dec 12 17:18:59.030000 audit: BPF prog-id=17 op=LOAD Dec 12 17:18:59.066000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:18:59.066000 audit[1223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff0695410 a2=4000 a3=0 items=0 ppid=1 pid=1223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:18:59.066000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:18:58.810871 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:18:58.832099 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:18:58.832556 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:18:59.074977 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:18:59.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.077570 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:18:59.078913 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:18:59.080209 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:18:59.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.083023 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:18:59.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.084563 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:18:59.084737 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:18:59.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:59.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.086405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:18:59.086590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:18:59.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.088088 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:18:59.088279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:18:59.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.089836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:18:59.090070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:18:59.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.091598 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:18:59.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.091752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:18:59.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:59.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.093405 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:18:59.093592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:18:59.095118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:18:59.097185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:18:59.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.099672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:18:59.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.101563 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:18:59.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.115628 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:18:59.117486 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:18:59.119859 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:18:59.122046 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:18:59.123121 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:18:59.123161 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:18:59.125037 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:18:59.126410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:18:59.126519 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:18:59.130001 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:18:59.132378 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:18:59.133601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:18:59.134779 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:18:59.135964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:18:59.138156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
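Most of the records in this stretch are kernel audit events: SERVICE_START / SERVICE_STOP pairs emitted as systemd starts and stops units, each ending in res=success. A minimal sketch for tallying them from the same saved excerpt, again assuming a plain-text boot.log; audit records that happen to be wrapped across physical lines in this dump are simply skipped by the line-by-line scan:

import re
from collections import Counter

# Matches kernel audit service records such as:
#   audit[1]: SERVICE_START pid=1 uid=0 ... msg='unit=systemd-journald
#   comm="systemd" exe="/usr/lib/systemd/systemd" ... res=success'
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?"
    r"unit=([\w@.:-]+) .*?res=(\w+)"
)

def audit_service_events(path="boot.log"):
    """Yield (action, unit, result) for each audit service record found."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for action, unit, result in AUDIT_RE.findall(line):
                yield action, unit, result

if __name__ == "__main__":
    # Count successful start/stop events; per-unit detail is also available
    # from the yielded tuples if needed.
    totals = Counter(
        action
        for action, _unit, result in audit_service_events()
        if result == "success"
    )
    for action, count in sorted(totals.items()):
        print(f"{action:14} {count}")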
Dec 12 17:18:59.141859 systemd-journald[1223]: Time spent on flushing to /var/log/journal/0771cb07a5b545e095d06438b97b5c51 is 23.271ms for 1000 entries. Dec 12 17:18:59.141859 systemd-journald[1223]: System Journal (/var/log/journal/0771cb07a5b545e095d06438b97b5c51) is 8M, max 163.5M, 155.5M free. Dec 12 17:18:59.185367 systemd-journald[1223]: Received client request to flush runtime journal. Dec 12 17:18:59.185448 kernel: loop1: detected capacity change from 0 to 211168 Dec 12 17:18:59.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.140830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:18:59.144906 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:18:59.149014 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:18:59.151468 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:18:59.153108 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:18:59.158033 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:18:59.159477 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:18:59.162085 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:18:59.188996 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:18:59.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.192204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:18:59.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.202977 kernel: loop2: detected capacity change from 0 to 100192 Dec 12 17:18:59.212352 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:18:59.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.214890 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:18:59.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.216000 audit: BPF prog-id=18 op=LOAD Dec 12 17:18:59.216000 audit: BPF prog-id=19 op=LOAD Dec 12 17:18:59.216000 audit: BPF prog-id=20 op=LOAD Dec 12 17:18:59.218212 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Dec 12 17:18:59.219000 audit: BPF prog-id=21 op=LOAD Dec 12 17:18:59.221016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:18:59.225165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:18:59.225975 kernel: loop3: detected capacity change from 0 to 109872 Dec 12 17:18:59.228000 audit: BPF prog-id=22 op=LOAD Dec 12 17:18:59.228000 audit: BPF prog-id=23 op=LOAD Dec 12 17:18:59.228000 audit: BPF prog-id=24 op=LOAD Dec 12 17:18:59.230049 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:18:59.232000 audit: BPF prog-id=25 op=LOAD Dec 12 17:18:59.233000 audit: BPF prog-id=26 op=LOAD Dec 12 17:18:59.233000 audit: BPF prog-id=27 op=LOAD Dec 12 17:18:59.235175 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:18:59.250971 kernel: loop4: detected capacity change from 0 to 211168 Dec 12 17:18:59.263361 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Dec 12 17:18:59.263379 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Dec 12 17:18:59.265969 kernel: loop5: detected capacity change from 0 to 100192 Dec 12 17:18:59.267617 systemd-nsresourced[1280]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:18:59.269074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:18:59.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.270685 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:18:59.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.283444 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:18:59.286168 kernel: loop6: detected capacity change from 0 to 109872 Dec 12 17:18:59.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.293700 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 12 17:18:59.296985 (sd-merge)[1284]: Merged extensions into '/usr'. Dec 12 17:18:59.301036 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:18:59.301052 systemd[1]: Reloading... Dec 12 17:18:59.345371 systemd-oomd[1277]: No swap; memory pressure usage will be degraded Dec 12 17:18:59.353960 zram_generator::config[1325]: No configuration found. Dec 12 17:18:59.357084 systemd-resolved[1278]: Positive Trust Anchors: Dec 12 17:18:59.357103 systemd-resolved[1278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:18:59.357107 systemd-resolved[1278]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:18:59.357142 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:18:59.365373 systemd-resolved[1278]: Defaulting to hostname 'linux'. Dec 12 17:18:59.513750 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:18:59.513859 systemd[1]: Reloading finished in 212 ms. Dec 12 17:18:59.546440 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:18:59.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.547887 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:18:59.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.549261 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:18:59.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.553280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:18:59.570344 systemd[1]: Starting ensure-sysext.service... Dec 12 17:18:59.572290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
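The "Positive Trust Anchors" entries above are the root zone's DS records in the standard "owner IN DS key-tag algorithm digest-type digest" layout (key tags 20326 and 38696, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256). A short Python sketch that splits such a record into named fields; the DSRecord/parse_ds names are assumptions made here for illustration:

    from typing import NamedTuple

    class DSRecord(NamedTuple):
        owner: str        # "." for the root zone
        key_tag: int      # e.g. 20326
        algorithm: int    # 8 = RSA/SHA-256
        digest_type: int  # 2 = SHA-256
        digest: str       # hex digest of the corresponding DNSKEY

    def parse_ds(record: str) -> DSRecord:
        """Parse one '<owner> IN DS <key-tag> <alg> <digest-type> <digest>' trust-anchor line."""
        owner, _in, _ds, key_tag, alg, digest_type, digest = record.split()
        return DSRecord(owner, int(key_tag), int(alg), int(digest_type), digest.lower())

    print(parse_ds(". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))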
Dec 12 17:18:59.573000 audit: BPF prog-id=28 op=LOAD Dec 12 17:18:59.573000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:18:59.573000 audit: BPF prog-id=29 op=LOAD Dec 12 17:18:59.573000 audit: BPF prog-id=30 op=LOAD Dec 12 17:18:59.573000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:18:59.573000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:18:59.574000 audit: BPF prog-id=31 op=LOAD Dec 12 17:18:59.574000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:18:59.574000 audit: BPF prog-id=32 op=LOAD Dec 12 17:18:59.574000 audit: BPF prog-id=33 op=LOAD Dec 12 17:18:59.574000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:18:59.574000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:18:59.575000 audit: BPF prog-id=34 op=LOAD Dec 12 17:18:59.575000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:18:59.575000 audit: BPF prog-id=35 op=LOAD Dec 12 17:18:59.575000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:18:59.576000 audit: BPF prog-id=36 op=LOAD Dec 12 17:18:59.576000 audit: BPF prog-id=37 op=LOAD Dec 12 17:18:59.576000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:18:59.576000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:18:59.576000 audit: BPF prog-id=38 op=LOAD Dec 12 17:18:59.576000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:18:59.576000 audit: BPF prog-id=39 op=LOAD Dec 12 17:18:59.576000 audit: BPF prog-id=40 op=LOAD Dec 12 17:18:59.576000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:18:59.577000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:18:59.588626 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:18:59.588667 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:18:59.589039 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:18:59.590072 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Dec 12 17:18:59.590120 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Dec 12 17:18:59.593357 systemd[1]: Reload requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:18:59.593374 systemd[1]: Reloading... Dec 12 17:18:59.594429 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:18:59.594446 systemd-tmpfiles[1363]: Skipping /boot Dec 12 17:18:59.601220 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:18:59.601233 systemd-tmpfiles[1363]: Skipping /boot Dec 12 17:18:59.646974 zram_generator::config[1395]: No configuration found. Dec 12 17:18:59.789130 systemd[1]: Reloading finished in 195 ms. Dec 12 17:18:59.814965 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:18:59.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:59.817000 audit: BPF prog-id=41 op=LOAD Dec 12 17:18:59.817000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:18:59.817000 audit: BPF prog-id=42 op=LOAD Dec 12 17:18:59.817000 audit: BPF prog-id=43 op=LOAD Dec 12 17:18:59.817000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:18:59.817000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:18:59.818000 audit: BPF prog-id=44 op=LOAD Dec 12 17:18:59.818000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:18:59.818000 audit: BPF prog-id=45 op=LOAD Dec 12 17:18:59.818000 audit: BPF prog-id=46 op=LOAD Dec 12 17:18:59.818000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:18:59.818000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:18:59.818000 audit: BPF prog-id=47 op=LOAD Dec 12 17:18:59.818000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:18:59.819000 audit: BPF prog-id=48 op=LOAD Dec 12 17:18:59.819000 audit: BPF prog-id=49 op=LOAD Dec 12 17:18:59.819000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:18:59.819000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:18:59.820000 audit: BPF prog-id=50 op=LOAD Dec 12 17:18:59.820000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:18:59.820000 audit: BPF prog-id=51 op=LOAD Dec 12 17:18:59.837000 audit: BPF prog-id=52 op=LOAD Dec 12 17:18:59.837000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:18:59.837000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:18:59.838000 audit: BPF prog-id=53 op=LOAD Dec 12 17:18:59.838000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:18:59.842811 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:18:59.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.851507 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:18:59.853845 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:18:59.864923 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:18:59.869193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:18:59.869000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:18:59.869000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:18:59.870000 audit: BPF prog-id=54 op=LOAD Dec 12 17:18:59.870000 audit: BPF prog-id=55 op=LOAD Dec 12 17:18:59.872014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:18:59.875334 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:18:59.881477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:18:59.883687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:18:59.889930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:18:59.892451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:18:59.893708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:18:59.893912 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
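The long runs of "audit: BPF prog-id=N op=LOAD" / "op=UNLOAD" records above are the kernel's audit stream recording systemd swapping its per-unit BPF programs during the two daemon reloads: replacement prog-ids are loaded first, then the superseded ones are unloaded. A small Python sketch that tallies such records from a saved copy of this console log; the pattern and function name are illustrative assumptions:

    import re
    from collections import Counter

    BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def tally_bpf_ops(log_text):
        """Count BPF LOAD/UNLOAD audit records in a captured console log."""
        return Counter(op for _prog_id, op in BPF_RE.findall(log_text))

    sample = ("Dec 12 17:18:59.817000 audit: BPF prog-id=41 op=LOAD "
              "Dec 12 17:18:59.817000 audit: BPF prog-id=31 op=UNLOAD")
    print(tally_bpf_ops(sample))  # Counter({'LOAD': 1, 'UNLOAD': 1})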
Dec 12 17:18:59.894014 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:18:59.898214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:18:59.899991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:18:59.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.902983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:18:59.904990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:18:59.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.907632 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:18:59.907871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:18:59.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.910352 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:18:59.910000 audit[1442]: SYSTEM_BOOT pid=1442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.925002 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:18:59.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.927995 systemd[1]: Finished ensure-sysext.service. Dec 12 17:18:59.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:18:59.929246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:18:59.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:18:59.933542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:18:59.934273 systemd-udevd[1436]: Using default interface naming scheme 'v257'. Dec 12 17:18:59.934902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:18:59.934000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:18:59.934000 audit[1466]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe3269880 a2=420 a3=0 items=0 ppid=1431 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:18:59.934000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:18:59.935572 augenrules[1466]: No rules Dec 12 17:18:59.937632 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:18:59.942117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:18:59.956121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:18:59.957569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:18:59.957686 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:18:59.957721 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:18:59.959514 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:18:59.960861 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:18:59.961510 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:18:59.963174 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:18:59.964863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:18:59.966158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:18:59.967664 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:18:59.968271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:18:59.969769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:18:59.970041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:18:59.971635 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:18:59.971848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
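The audit PROCTITLE value in the record above is the triggering command line, hex-encoded with NUL bytes separating the argv entries; decoded it reads "/sbin/auditctl -R /etc/audit/audit.rules", which matches the augenrules run (pid 1466) that just reported "No rules". A standard-library Python sketch of the decoding; the helper name is illustrative:

    import binascii

    def decode_proctitle(hex_value):
        """Decode an audit PROCTITLE hex string into its NUL-separated argv list."""
        raw = binascii.unhexlify(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Value copied from the PROCTITLE record above.
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']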
Dec 12 17:18:59.974647 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:18:59.988060 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:18:59.989563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:18:59.989644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:19:00.066174 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:19:00.069601 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:19:00.076529 systemd-networkd[1497]: lo: Link UP Dec 12 17:19:00.077143 systemd-networkd[1497]: lo: Gained carrier Dec 12 17:19:00.077514 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:19:00.080074 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:19:00.081296 systemd[1]: Reached target network.target - Network. Dec 12 17:19:00.085161 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:19:00.087735 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:19:00.094682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:19:00.098680 systemd-networkd[1497]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:19:00.098691 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:19:00.100515 systemd-networkd[1497]: eth0: Link UP Dec 12 17:19:00.100805 systemd-networkd[1497]: eth0: Gained carrier Dec 12 17:19:00.100827 systemd-networkd[1497]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:19:00.104204 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:19:00.115003 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:19:00.115835 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Dec 12 17:19:00.118963 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:19:00.537934 systemd-resolved[1278]: Clock change detected. Flushing caches. Dec 12 17:19:00.537997 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:19:00.538043 systemd-timesyncd[1475]: Initial clock synchronization to Fri 2025-12-12 17:19:00.535713 UTC. Dec 12 17:19:00.545420 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:19:00.607268 ldconfig[1433]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:19:00.612749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:19:00.627976 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:19:00.635420 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
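The jump in console timestamps above, from 17:19:00.118963 ("Finished systemd-networkd-persistent-storage...") to 17:19:00.537934 ("Clock change detected. Flushing caches."), is systemd-timesyncd stepping the clock to the NTP server's time (initial synchronization to 17:19:00.535713 UTC). A quick Python check of the apparent step; since the gap also contains real elapsed time, the figure is only a rough upper bound:

    from datetime import datetime

    # Timestamps read off the two adjacent console lines around the timesyncd step.
    before = datetime.strptime("2025-12-12 17:19:00.118963", "%Y-%m-%d %H:%M:%S.%f")
    after = datetime.strptime("2025-12-12 17:19:00.537934", "%Y-%m-%d %H:%M:%S.%f")

    print((after - before).total_seconds())  # ~0.419 s apparent jump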
Dec 12 17:19:00.658584 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:19:00.672655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:19:00.675194 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:19:00.676347 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:19:00.677569 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:19:00.679076 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:19:00.680231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:19:00.681564 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:19:00.682855 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:19:00.683870 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:19:00.685170 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:19:00.685210 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:19:00.686119 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:19:00.689602 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:19:00.692126 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:19:00.695362 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:19:00.696848 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:19:00.698166 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:19:00.701643 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:19:00.702988 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:19:00.704833 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:19:00.706002 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:19:00.706912 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:19:00.707835 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:19:00.707870 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:19:00.709120 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:19:00.711496 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:19:00.713627 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:19:00.715781 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:19:00.718157 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:19:00.719366 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:19:00.721718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Dec 12 17:19:00.723876 jq[1549]: false Dec 12 17:19:00.724246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:19:00.727809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:19:00.732679 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:19:00.734693 extend-filesystems[1550]: Found /dev/vda6 Dec 12 17:19:00.738704 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:19:00.739874 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:19:00.740485 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:19:00.741212 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:19:00.742792 extend-filesystems[1550]: Found /dev/vda9 Dec 12 17:19:00.743526 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:19:00.746019 extend-filesystems[1550]: Checking size of /dev/vda9 Dec 12 17:19:00.751559 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:19:00.753380 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:19:00.755081 jq[1569]: true Dec 12 17:19:00.755497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:19:00.756010 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:19:00.756243 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:19:00.759444 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:19:00.759895 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:19:00.764718 extend-filesystems[1550]: Resized partition /dev/vda9 Dec 12 17:19:00.768222 extend-filesystems[1583]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:19:00.778594 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 12 17:19:00.785494 update_engine[1566]: I20251212 17:19:00.784344 1566 main.cc:92] Flatcar Update Engine starting Dec 12 17:19:00.790135 jq[1580]: true Dec 12 17:19:00.803945 tar[1576]: linux-arm64/LICENSE Dec 12 17:19:00.804190 tar[1576]: linux-arm64/helm Dec 12 17:19:00.807545 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 12 17:19:00.812264 dbus-daemon[1547]: [system] SELinux support is enabled Dec 12 17:19:00.812545 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:19:00.824708 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:19:00.824708 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:19:00.824708 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 12 17:19:00.816521 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
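The extend-filesystems/resize2fs output above grows the filesystem on /dev/vda9 (mounted on /) from 456704 to 1784827 blocks at a 4 KiB block size, i.e. from roughly 1.74 GiB to roughly 6.81 GiB. The conversion, as a quick Python check:

    BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs message above

    def blocks_to_gib(blocks):
        """Convert an ext4 block count to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {blocks_to_gib(456704):.2f} GiB")   # ~1.74 GiB
    print(f"after:  {blocks_to_gib(1784827):.2f} GiB")  # ~6.81 GiB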
Dec 12 17:19:00.830762 update_engine[1566]: I20251212 17:19:00.827476 1566 update_check_scheduler.cc:74] Next update check in 2m11s Dec 12 17:19:00.830792 extend-filesystems[1550]: Resized filesystem in /dev/vda9 Dec 12 17:19:00.816555 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:19:00.819892 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:19:00.819931 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:19:00.825057 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:19:00.828021 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:19:00.829069 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:19:00.853165 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:19:00.855666 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:19:00.859362 systemd-logind[1564]: New seat seat0. Dec 12 17:19:00.862413 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:19:00.868715 bash[1614]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:19:00.871007 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:19:00.875462 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:19:00.937224 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:19:00.946368 containerd[1581]: time="2025-12-12T17:19:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:19:00.947563 containerd[1581]: time="2025-12-12T17:19:00.947012714Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:19:00.957605 containerd[1581]: time="2025-12-12T17:19:00.957539474Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.08µs" Dec 12 17:19:00.957605 containerd[1581]: time="2025-12-12T17:19:00.957586114Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:19:00.957723 containerd[1581]: time="2025-12-12T17:19:00.957634874Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:19:00.957723 containerd[1581]: time="2025-12-12T17:19:00.957648434Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:19:00.957823 containerd[1581]: time="2025-12-12T17:19:00.957802194Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:19:00.957850 containerd[1581]: time="2025-12-12T17:19:00.957823874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:19:00.957890 containerd[1581]: time="2025-12-12T17:19:00.957875794Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:19:00.957924 containerd[1581]: 
time="2025-12-12T17:19:00.957890394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.958247 containerd[1581]: time="2025-12-12T17:19:00.958209434Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.958247 containerd[1581]: time="2025-12-12T17:19:00.958230834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:19:00.958298 containerd[1581]: time="2025-12-12T17:19:00.958251554Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:19:00.958298 containerd[1581]: time="2025-12-12T17:19:00.958261354Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.958642 containerd[1581]: time="2025-12-12T17:19:00.958571914Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.958739 containerd[1581]: time="2025-12-12T17:19:00.958714194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:19:00.959542 containerd[1581]: time="2025-12-12T17:19:00.959339874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.959697 containerd[1581]: time="2025-12-12T17:19:00.959675914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.960204 containerd[1581]: time="2025-12-12T17:19:00.960177594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:19:00.960337 containerd[1581]: time="2025-12-12T17:19:00.960283754Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:19:00.960379 containerd[1581]: time="2025-12-12T17:19:00.960360914Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:19:00.960701 containerd[1581]: time="2025-12-12T17:19:00.960674714Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:19:00.960887 containerd[1581]: time="2025-12-12T17:19:00.960823634Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:19:00.964973 containerd[1581]: time="2025-12-12T17:19:00.964922634Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:19:00.965069 containerd[1581]: time="2025-12-12T17:19:00.964988554Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:19:00.965093 containerd[1581]: time="2025-12-12T17:19:00.965080474Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:19:00.965114 containerd[1581]: 
time="2025-12-12T17:19:00.965095594Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:19:00.965133 containerd[1581]: time="2025-12-12T17:19:00.965123114Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:19:00.965184 containerd[1581]: time="2025-12-12T17:19:00.965138074Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:19:00.965184 containerd[1581]: time="2025-12-12T17:19:00.965151194Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:19:00.965184 containerd[1581]: time="2025-12-12T17:19:00.965162194Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:19:00.965184 containerd[1581]: time="2025-12-12T17:19:00.965173634Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:19:00.965257 containerd[1581]: time="2025-12-12T17:19:00.965187354Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:19:00.965257 containerd[1581]: time="2025-12-12T17:19:00.965200314Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:19:00.965257 containerd[1581]: time="2025-12-12T17:19:00.965211914Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:19:00.965257 containerd[1581]: time="2025-12-12T17:19:00.965222714Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:19:00.965257 containerd[1581]: time="2025-12-12T17:19:00.965235514Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:19:00.965424 containerd[1581]: time="2025-12-12T17:19:00.965400554Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:19:00.965453 containerd[1581]: time="2025-12-12T17:19:00.965438514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:19:00.965471 containerd[1581]: time="2025-12-12T17:19:00.965458674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:19:00.965494 containerd[1581]: time="2025-12-12T17:19:00.965470394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:19:00.965494 containerd[1581]: time="2025-12-12T17:19:00.965481714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:19:00.965494 containerd[1581]: time="2025-12-12T17:19:00.965491074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:19:00.965564 containerd[1581]: time="2025-12-12T17:19:00.965502994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:19:00.965564 containerd[1581]: time="2025-12-12T17:19:00.965538394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:19:00.965564 containerd[1581]: time="2025-12-12T17:19:00.965550314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:19:00.965564 containerd[1581]: 
time="2025-12-12T17:19:00.965561554Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:19:00.965636 containerd[1581]: time="2025-12-12T17:19:00.965573474Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:19:00.965636 containerd[1581]: time="2025-12-12T17:19:00.965604674Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:19:00.965670 containerd[1581]: time="2025-12-12T17:19:00.965651194Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:19:00.965670 containerd[1581]: time="2025-12-12T17:19:00.965666394Z" level=info msg="Start snapshots syncer" Dec 12 17:19:00.965728 containerd[1581]: time="2025-12-12T17:19:00.965695714Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:19:00.965986 containerd[1581]: time="2025-12-12T17:19:00.965946634Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:19:00.966105 containerd[1581]: time="2025-12-12T17:19:00.966008274Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:19:00.966105 containerd[1581]: time="2025-12-12T17:19:00.966061274Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:19:00.966190 containerd[1581]: time="2025-12-12T17:19:00.966171394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966199634Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966221914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966234274Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966246754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966260114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966271234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966282914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966294954Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:19:00.966355 containerd[1581]: time="2025-12-12T17:19:00.966343914Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966361714Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966371074Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966380874Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966388154Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966398394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966410514Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966427714Z" level=info msg="runtime interface created" Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966433994Z" level=info msg="created NRI interface" Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966442754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966453394Z" level=info msg="Connect containerd service" Dec 12 17:19:00.966537 containerd[1581]: time="2025-12-12T17:19:00.966473674Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:19:00.967288 containerd[1581]: time="2025-12-12T17:19:00.967227994Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:19:01.038492 containerd[1581]: time="2025-12-12T17:19:01.038360674Z" level=info msg="Start subscribing containerd event" Dec 12 17:19:01.038682 containerd[1581]: time="2025-12-12T17:19:01.038659754Z" level=info msg="Start recovering state" Dec 12 17:19:01.038826 containerd[1581]: time="2025-12-12T17:19:01.038684554Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:19:01.039003 containerd[1581]: time="2025-12-12T17:19:01.038984754Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:19:01.039177 containerd[1581]: time="2025-12-12T17:19:01.038909594Z" level=info msg="Start event monitor" Dec 12 17:19:01.039265 containerd[1581]: time="2025-12-12T17:19:01.039251674Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:19:01.039448 containerd[1581]: time="2025-12-12T17:19:01.039431714Z" level=info msg="Start streaming server" Dec 12 17:19:01.039543 containerd[1581]: time="2025-12-12T17:19:01.039530554Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:19:01.042782 containerd[1581]: time="2025-12-12T17:19:01.042676354Z" level=info msg="runtime interface starting up..." Dec 12 17:19:01.042782 containerd[1581]: time="2025-12-12T17:19:01.042716354Z" level=info msg="starting plugins..." Dec 12 17:19:01.042782 containerd[1581]: time="2025-12-12T17:19:01.042753474Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:19:01.043120 containerd[1581]: time="2025-12-12T17:19:01.043105434Z" level=info msg="containerd successfully booted in 0.097109s" Dec 12 17:19:01.043305 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:19:01.076451 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:19:01.097678 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:19:01.100727 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:19:01.116733 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:19:01.117004 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:19:01.119682 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:19:01.139790 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:19:01.142824 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:19:01.145903 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:19:01.147574 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:19:01.158332 tar[1576]: linux-arm64/README.md Dec 12 17:19:01.184092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:19:02.489667 systemd-networkd[1497]: eth0: Gained IPv6LL Dec 12 17:19:02.492085 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:19:02.494856 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:19:02.499717 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:19:02.504087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:02.516725 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:19:02.541631 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
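The containerd error above ("no network config found in /etc/cni/net.d") only means no CNI configuration was present yet when the CRI plugin initialised; the "cni network conf syncer" it starts keeps watching that directory and picks a config up once one is installed. A minimal Python sketch of the same directory check an operator might run; the confDir path comes from the CRI config dump above, everything else is illustrative:

    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")  # confDir from the containerd CRI config above

    def cni_configs():
        """List candidate CNI config files (.conf/.conflist/.json) in confDir."""
        if not CNI_CONF_DIR.is_dir():
            return []
        return sorted(p for p in CNI_CONF_DIR.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})

    configs = cni_configs()
    print(configs or "no CNI network config found")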
Dec 12 17:19:02.543629 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:19:02.543903 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:19:02.547752 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:19:03.095119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:03.098273 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:19:03.098866 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:19:03.099960 systemd[1]: Startup finished in 1.454s (kernel) + 5.292s (initrd) + 4.390s (userspace) = 11.137s. Dec 12 17:19:03.466249 kubelet[1684]: E1212 17:19:03.466175 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:19:03.468717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:19:03.468854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:19:03.469182 systemd[1]: kubelet.service: Consumed 762ms CPU time, 259.4M memory peak. Dec 12 17:19:04.716373 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:19:04.717461 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:37434.service - OpenSSH per-connection server daemon (10.0.0.1:37434). Dec 12 17:19:04.792747 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 37434 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:04.794453 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:04.800517 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:19:04.801491 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:19:04.805714 systemd-logind[1564]: New session 1 of user core. Dec 12 17:19:04.818306 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:19:04.821755 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:19:04.840501 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:19:04.842837 systemd-logind[1564]: New session c1 of user core. Dec 12 17:19:04.958009 systemd[1703]: Queued start job for default target default.target. Dec 12 17:19:04.975549 systemd[1703]: Created slice app.slice - User Application Slice. Dec 12 17:19:04.975581 systemd[1703]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:19:04.975594 systemd[1703]: Reached target paths.target - Paths. Dec 12 17:19:04.975644 systemd[1703]: Reached target timers.target - Timers. Dec 12 17:19:04.976861 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:19:04.977602 systemd[1703]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:19:04.986642 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:19:04.986706 systemd[1703]: Reached target sockets.target - Sockets. 
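The kubelet failure above is the usual state on a node that has not been configured yet: /var/lib/kubelet/config.yaml does not exist until something like kubeadm writes it, so the unit exits with status 1 and will keep failing until the file appears. A tiny, purely illustrative Python sketch of that precondition check:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

    # Mirrors the condition kubelet failed on: the config file simply is not there yet.
    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        print(f"{KUBELET_CONFIG} missing - kubelet will keep exiting until it is written")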
Dec 12 17:19:04.987206 systemd[1703]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:19:04.987265 systemd[1703]: Reached target basic.target - Basic System. Dec 12 17:19:04.987319 systemd[1703]: Reached target default.target - Main User Target. Dec 12 17:19:04.987347 systemd[1703]: Startup finished in 138ms. Dec 12 17:19:04.987657 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:19:04.998706 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:19:05.021731 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:37448.service - OpenSSH per-connection server daemon (10.0.0.1:37448). Dec 12 17:19:05.081002 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 37448 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:05.085410 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:05.090308 systemd-logind[1564]: New session 2 of user core. Dec 12 17:19:05.105752 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:19:05.117016 sshd[1719]: Connection closed by 10.0.0.1 port 37448 Dec 12 17:19:05.117481 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 12 17:19:05.129412 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:37448.service: Deactivated successfully. Dec 12 17:19:05.131874 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:19:05.134425 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:19:05.135766 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:37450.service - OpenSSH per-connection server daemon (10.0.0.1:37450). Dec 12 17:19:05.139223 systemd-logind[1564]: Removed session 2. Dec 12 17:19:05.187792 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 37450 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:05.188962 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:05.194259 systemd-logind[1564]: New session 3 of user core. Dec 12 17:19:05.199716 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:19:05.207312 sshd[1728]: Connection closed by 10.0.0.1 port 37450 Dec 12 17:19:05.207651 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Dec 12 17:19:05.220437 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:37450.service: Deactivated successfully. Dec 12 17:19:05.221895 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:19:05.224896 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:19:05.226674 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Dec 12 17:19:05.227314 systemd-logind[1564]: Removed session 3. Dec 12 17:19:05.283845 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:05.285164 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:05.290392 systemd-logind[1564]: New session 4 of user core. Dec 12 17:19:05.305729 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:19:05.316732 sshd[1738]: Connection closed by 10.0.0.1 port 37452 Dec 12 17:19:05.317098 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Dec 12 17:19:05.332529 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:37452.service: Deactivated successfully. 
Dec 12 17:19:05.334812 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:19:05.335498 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:19:05.337736 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:37466.service - OpenSSH per-connection server daemon (10.0.0.1:37466). Dec 12 17:19:05.338162 systemd-logind[1564]: Removed session 4. Dec 12 17:19:05.394692 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 37466 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:05.395804 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:05.399611 systemd-logind[1564]: New session 5 of user core. Dec 12 17:19:05.405758 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:19:05.425316 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:19:05.425609 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:19:05.453680 sudo[1748]: pam_unix(sudo:session): session closed for user root Dec 12 17:19:05.455462 sshd[1747]: Connection closed by 10.0.0.1 port 37466 Dec 12 17:19:05.455987 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 12 17:19:05.472771 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:37466.service: Deactivated successfully. Dec 12 17:19:05.475932 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:19:05.476738 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:19:05.479249 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:37472.service - OpenSSH per-connection server daemon (10.0.0.1:37472). Dec 12 17:19:05.480221 systemd-logind[1564]: Removed session 5. Dec 12 17:19:05.541673 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 37472 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:05.542907 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:05.548653 systemd-logind[1564]: New session 6 of user core. Dec 12 17:19:05.563721 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:19:05.575167 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:19:05.575429 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:19:05.580944 sudo[1759]: pam_unix(sudo:session): session closed for user root Dec 12 17:19:05.586745 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:19:05.586990 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:19:05.597994 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:19:05.635000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:19:05.636986 augenrules[1781]: No rules Dec 12 17:19:05.638170 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:19:05.638463 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 12 17:19:05.638843 kernel: kauditd_printk_skb: 173 callbacks suppressed Dec 12 17:19:05.638870 kernel: audit: type=1305 audit(1765559945.635:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:19:05.638906 kernel: audit: type=1300 audit(1765559945.635:215): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9a37550 a2=420 a3=0 items=0 ppid=1762 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:05.635000 audit[1781]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9a37550 a2=420 a3=0 items=0 ppid=1762 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:05.640880 sudo[1758]: pam_unix(sudo:session): session closed for user root Dec 12 17:19:05.635000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:19:05.644437 kernel: audit: type=1327 audit(1765559945.635:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:19:05.644489 kernel: audit: type=1130 audit(1765559945.635:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.644598 sshd[1757]: Connection closed by 10.0.0.1 port 37472 Dec 12 17:19:05.644972 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Dec 12 17:19:05.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.648943 kernel: audit: type=1131 audit(1765559945.635:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.648984 kernel: audit: type=1106 audit(1765559945.637:218): pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.637000 audit[1758]: USER_END pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.637000 audit[1758]: CRED_DISP pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:19:05.653896 kernel: audit: type=1104 audit(1765559945.637:219): pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.653927 kernel: audit: type=1106 audit(1765559945.643:220): pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.643000 audit[1754]: USER_END pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.643000 audit[1754]: CRED_DISP pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.659405 kernel: audit: type=1104 audit(1765559945.643:221): pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.663830 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:37472.service: Deactivated successfully. Dec 12 17:19:05.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.23:22-10.0.0.1:37472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.665754 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:19:05.667379 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:19:05.667523 kernel: audit: type=1131 audit(1765559945.662:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.23:22-10.0.0.1:37472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.670750 systemd-logind[1564]: Removed session 6. Dec 12 17:19:05.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:37476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.671740 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:37476.service - OpenSSH per-connection server daemon (10.0.0.1:37476). 
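[Annotation] The proctitle= fields in the audit records above and below are hex-encoded, NUL-separated command lines. As an illustration (not part of the log), the auditctl record above decodes to /sbin/auditctl -R /etc/audit/audit.rules; a sketch of the decoding, assuming xxd is available on the node:
    # Decode an audit PROCTITLE value; NUL argument separators become spaces
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules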
Dec 12 17:19:05.730000 audit[1790]: USER_ACCT pid=1790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.732070 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 37476 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:19:05.732000 audit[1790]: CRED_ACQ pid=1790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.732000 audit[1790]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4814c30 a2=3 a3=0 items=0 ppid=1 pid=1790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:05.732000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:19:05.734587 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:19:05.738947 systemd-logind[1564]: New session 7 of user core. Dec 12 17:19:05.757734 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:19:05.758000 audit[1790]: USER_START pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.760000 audit[1793]: CRED_ACQ pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:05.767000 audit[1794]: USER_ACCT pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.769630 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:19:05.768000 audit[1794]: CRED_REFR pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:05.769942 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:19:05.770000 audit[1794]: USER_START pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:06.049407 systemd[1]: Starting docker.service - Docker Application Container Engine... 
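[Annotation] docker.service begins starting here; the warning on the next line about DOCKER_CGROUPS, DOCKER_OPTS, and friends only means the unit references environment variables that no drop-in or environment file defines. A hedged way to inspect that on the node (exact drop-in names are whatever the image ships and are not shown in this log):
    # Show the unit plus any drop-ins that could define DOCKER_OPTS and friends
    systemctl cat docker.service
    # Show the environment actually handed to dockerd
    systemctl show docker.service -p Environment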
Dec 12 17:19:06.063817 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:19:06.266480 dockerd[1814]: time="2025-12-12T17:19:06.266059954Z" level=info msg="Starting up" Dec 12 17:19:06.267115 dockerd[1814]: time="2025-12-12T17:19:06.267077434Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:19:06.279353 dockerd[1814]: time="2025-12-12T17:19:06.277906674Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:19:06.464867 dockerd[1814]: time="2025-12-12T17:19:06.464610874Z" level=info msg="Loading containers: start." Dec 12 17:19:06.472529 kernel: Initializing XFRM netlink socket Dec 12 17:19:06.514000 audit[1867]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.514000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe26d0640 a2=0 a3=0 items=0 ppid=1814 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.514000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:19:06.516000 audit[1869]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.516000 audit[1869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd45f1050 a2=0 a3=0 items=0 ppid=1814 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:19:06.518000 audit[1871]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.518000 audit[1871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc47b26b0 a2=0 a3=0 items=0 ppid=1814 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.518000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:19:06.520000 audit[1873]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.520000 audit[1873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2b13940 a2=0 a3=0 items=0 ppid=1814 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.520000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:19:06.521000 audit[1875]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.521000 audit[1875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7c130a0 a2=0 a3=0 items=0 ppid=1814 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.521000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:19:06.523000 audit[1877]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.523000 audit[1877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc171cb30 a2=0 a3=0 items=0 ppid=1814 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.523000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:19:06.524000 audit[1879]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.524000 audit[1879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff8d8e500 a2=0 a3=0 items=0 ppid=1814 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.524000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:19:06.526000 audit[1881]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.526000 audit[1881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffde2251c0 a2=0 a3=0 items=0 ppid=1814 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.526000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:19:06.557000 audit[1884]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.557000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffef9a4960 a2=0 a3=0 items=0 ppid=1814 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.557000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:19:06.559000 audit[1886]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1886 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.559000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc62c3760 a2=0 a3=0 items=0 ppid=1814 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:19:06.560000 audit[1888]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.560000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffffc632c80 a2=0 a3=0 items=0 ppid=1814 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.560000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:19:06.562000 audit[1890]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.562000 audit[1890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffde01d530 a2=0 a3=0 items=0 ppid=1814 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.562000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:19:06.564000 audit[1892]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.564000 audit[1892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe4f61b80 a2=0 a3=0 items=0 ppid=1814 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:19:06.596000 audit[1922]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.596000 audit[1922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd2d0e7d0 a2=0 a3=0 items=0 ppid=1814 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:19:06.598000 audit[1924]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.598000 audit[1924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcd336f30 a2=0 a3=0 items=0 ppid=1814 
pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:19:06.600000 audit[1926]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.600000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe212c090 a2=0 a3=0 items=0 ppid=1814 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:19:06.602000 audit[1928]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.602000 audit[1928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1608550 a2=0 a3=0 items=0 ppid=1814 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:19:06.604000 audit[1930]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.604000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee1386e0 a2=0 a3=0 items=0 ppid=1814 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:19:06.605000 audit[1932]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.605000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc64aefa0 a2=0 a3=0 items=0 ppid=1814 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.605000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:19:06.607000 audit[1934]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.607000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe387d980 a2=0 a3=0 items=0 ppid=1814 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:19:06.607000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:19:06.609000 audit[1936]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.609000 audit[1936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc8bef5c0 a2=0 a3=0 items=0 ppid=1814 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:19:06.611000 audit[1938]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.611000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff5931700 a2=0 a3=0 items=0 ppid=1814 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:19:06.613000 audit[1940]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.613000 audit[1940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd01cd5a0 a2=0 a3=0 items=0 ppid=1814 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.613000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:19:06.615000 audit[1942]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.615000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff5305170 a2=0 a3=0 items=0 ppid=1814 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.615000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:19:06.616000 audit[1944]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.616000 audit[1944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcd773180 a2=0 a3=0 items=0 ppid=1814 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.616000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:19:06.618000 audit[1946]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.618000 audit[1946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcc12f2a0 a2=0 a3=0 items=0 ppid=1814 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:19:06.623000 audit[1951]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.623000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc6902b40 a2=0 a3=0 items=0 ppid=1814 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:19:06.626000 audit[1953]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.626000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd4d77f20 a2=0 a3=0 items=0 ppid=1814 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:19:06.627000 audit[1955]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.627000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe7dc7f90 a2=0 a3=0 items=0 ppid=1814 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.627000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:19:06.629000 audit[1957]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.629000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffebc42890 a2=0 a3=0 items=0 ppid=1814 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:19:06.631000 audit[1959]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.631000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd134c7d0 a2=0 a3=0 items=0 ppid=1814 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:19:06.633000 audit[1961]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:06.633000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc3c64030 a2=0 a3=0 items=0 ppid=1814 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:19:06.647000 audit[1967]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.647000 audit[1967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffc4b98d0 a2=0 a3=0 items=0 ppid=1814 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:19:06.649000 audit[1969]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.649000 audit[1969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc63b5790 a2=0 a3=0 items=0 ppid=1814 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:19:06.656000 audit[1977]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.656000 audit[1977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffcc27a880 a2=0 a3=0 items=0 ppid=1814 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:19:06.663000 audit[1983]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 
17:19:06.663000 audit[1983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffa17aa50 a2=0 a3=0 items=0 ppid=1814 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:19:06.665000 audit[1985]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.665000 audit[1985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff80f4030 a2=0 a3=0 items=0 ppid=1814 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:19:06.667000 audit[1987]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.667000 audit[1987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcc643d00 a2=0 a3=0 items=0 ppid=1814 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:19:06.669000 audit[1989]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.669000 audit[1989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc0742e70 a2=0 a3=0 items=0 ppid=1814 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.669000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:19:06.670000 audit[1991]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:06.670000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd8244470 a2=0 a3=0 items=0 ppid=1814 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:06.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:19:06.672874 
systemd-networkd[1497]: docker0: Link UP Dec 12 17:19:06.676985 dockerd[1814]: time="2025-12-12T17:19:06.676918834Z" level=info msg="Loading containers: done." Dec 12 17:19:06.694101 dockerd[1814]: time="2025-12-12T17:19:06.694047514Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:19:06.694250 dockerd[1814]: time="2025-12-12T17:19:06.694129114Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:19:06.694301 dockerd[1814]: time="2025-12-12T17:19:06.694274234Z" level=info msg="Initializing buildkit" Dec 12 17:19:06.714848 dockerd[1814]: time="2025-12-12T17:19:06.714799114Z" level=info msg="Completed buildkit initialization" Dec 12 17:19:06.720861 dockerd[1814]: time="2025-12-12T17:19:06.720753234Z" level=info msg="Daemon has completed initialization" Dec 12 17:19:06.720861 dockerd[1814]: time="2025-12-12T17:19:06.720830074Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:19:06.721263 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:19:06.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:07.337456 containerd[1581]: time="2025-12-12T17:19:07.337400234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:19:08.211177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802027438.mount: Deactivated successfully. Dec 12 17:19:08.755549 containerd[1581]: time="2025-12-12T17:19:08.755209834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:08.756026 containerd[1581]: time="2025-12-12T17:19:08.755969314Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Dec 12 17:19:08.757235 containerd[1581]: time="2025-12-12T17:19:08.757178834Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:08.759869 containerd[1581]: time="2025-12-12T17:19:08.759810234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:08.761216 containerd[1581]: time="2025-12-12T17:19:08.760981634Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.42353976s" Dec 12 17:19:08.761216 containerd[1581]: time="2025-12-12T17:19:08.761016794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:19:08.762371 containerd[1581]: time="2025-12-12T17:19:08.762326634Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" 
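[Annotation] The long run of NETFILTER_CFG/SYSCALL records above is dockerd creating its iptables and ip6tables chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) and, per the decoded proctitles, inserting the MASQUERADE rule for 172.17.0.0/16 out of docker0 before the bridge comes up. A sketch of how to view the result, assuming root on the node:
    # NAT chains and the 172.17.0.0/16 masquerade rule created by dockerd
    iptables -t nat -nL DOCKER
    iptables -t nat -nL POSTROUTING
    # Filter-table plumbing for container traffic
    iptables -nL DOCKER-FORWARD
    iptables -nL DOCKER-USER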
Dec 12 17:19:10.213774 containerd[1581]: time="2025-12-12T17:19:10.213717034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:10.214762 containerd[1581]: time="2025-12-12T17:19:10.214518754Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 12 17:19:10.215566 containerd[1581]: time="2025-12-12T17:19:10.215531514Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:10.218565 containerd[1581]: time="2025-12-12T17:19:10.218531314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:10.219856 containerd[1581]: time="2025-12-12T17:19:10.219644714Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.45727808s" Dec 12 17:19:10.219856 containerd[1581]: time="2025-12-12T17:19:10.219680354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:19:10.220415 containerd[1581]: time="2025-12-12T17:19:10.220387594Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:19:11.681111 containerd[1581]: time="2025-12-12T17:19:11.681055354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:11.682351 containerd[1581]: time="2025-12-12T17:19:11.682270794Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 12 17:19:11.683187 containerd[1581]: time="2025-12-12T17:19:11.683139914Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:11.686576 containerd[1581]: time="2025-12-12T17:19:11.686471874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:11.688193 containerd[1581]: time="2025-12-12T17:19:11.688139114Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.4677168s" Dec 12 17:19:11.688193 containerd[1581]: time="2025-12-12T17:19:11.688177394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:19:11.688703 
containerd[1581]: time="2025-12-12T17:19:11.688656114Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:19:12.650354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415147484.mount: Deactivated successfully. Dec 12 17:19:12.884327 containerd[1581]: time="2025-12-12T17:19:12.884281714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:12.885229 containerd[1581]: time="2025-12-12T17:19:12.884872794Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Dec 12 17:19:12.886216 containerd[1581]: time="2025-12-12T17:19:12.886153274Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:12.887870 containerd[1581]: time="2025-12-12T17:19:12.887837994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:12.888889 containerd[1581]: time="2025-12-12T17:19:12.888641074Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.19991076s" Dec 12 17:19:12.888889 containerd[1581]: time="2025-12-12T17:19:12.888665994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:19:12.889270 containerd[1581]: time="2025-12-12T17:19:12.889235994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:19:13.518337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:19:13.520035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:13.668269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:13.669628 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:19:13.669693 kernel: audit: type=1130 audit(1765559953.666:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:13.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:19:13.685838 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:19:13.734074 kubelet[2115]: E1212 17:19:13.734021 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:19:13.737458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:19:13.737748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:19:13.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:19:13.738187 systemd[1]: kubelet.service: Consumed 152ms CPU time, 108.2M memory peak. Dec 12 17:19:13.741540 kernel: audit: type=1131 audit(1765559953.736:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:19:13.762550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107408147.mount: Deactivated successfully. Dec 12 17:19:14.380135 containerd[1581]: time="2025-12-12T17:19:14.380050874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:14.382041 containerd[1581]: time="2025-12-12T17:19:14.381761674Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 12 17:19:14.383916 containerd[1581]: time="2025-12-12T17:19:14.383865354Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:14.386943 containerd[1581]: time="2025-12-12T17:19:14.386901234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:14.388228 containerd[1581]: time="2025-12-12T17:19:14.388185514Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.49890472s" Dec 12 17:19:14.388348 containerd[1581]: time="2025-12-12T17:19:14.388333314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:19:14.388808 containerd[1581]: time="2025-12-12T17:19:14.388781394Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:19:14.818911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133679143.mount: Deactivated successfully. 
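[Annotation] The kubelet start above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap (which the KUBELET_KUBEADM_ARGS reference and the kube image pulls here suggest) that file is only written by kubeadm init/join, so this failure and the retries that follow are expected until bootstrap completes. A hedged way to watch for the condition clearing:
    # The failure clears once the kubelet config has been written
    ls -l /var/lib/kubelet/config.yaml
    systemctl status kubelet --no-pager
    journalctl -u kubelet -b --no-pager | tail -n 20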
Dec 12 17:19:14.826734 containerd[1581]: time="2025-12-12T17:19:14.826673554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:19:14.827807 containerd[1581]: time="2025-12-12T17:19:14.827744914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:19:14.828818 containerd[1581]: time="2025-12-12T17:19:14.828775314Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:19:14.830529 containerd[1581]: time="2025-12-12T17:19:14.830464914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:19:14.831265 containerd[1581]: time="2025-12-12T17:19:14.831207994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 442.39424ms" Dec 12 17:19:14.831265 containerd[1581]: time="2025-12-12T17:19:14.831243634Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:19:14.832029 containerd[1581]: time="2025-12-12T17:19:14.831729754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:19:15.768613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744459443.mount: Deactivated successfully. 
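[Annotation] These PullImage/ImageCreate events come from containerd (containerd[1581]) with io.cri-containerd.image labels, i.e. the CRI plugin rather than dockerd, so the kube images land in containerd's k8s.io namespace. A sketch for listing them, assuming crictl or ctr is installed and configured on the node:
    # Via the CRI client (needs a configured runtime endpoint)
    crictl images
    # Or directly against containerd's k8s.io namespace
    ctr -n k8s.io images ls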
Dec 12 17:19:17.604473 containerd[1581]: time="2025-12-12T17:19:17.604418154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:17.605539 containerd[1581]: time="2025-12-12T17:19:17.605466194Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Dec 12 17:19:17.606519 containerd[1581]: time="2025-12-12T17:19:17.606394514Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:17.610265 containerd[1581]: time="2025-12-12T17:19:17.609622834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:17.611025 containerd[1581]: time="2025-12-12T17:19:17.610887474Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.7791216s" Dec 12 17:19:17.611025 containerd[1581]: time="2025-12-12T17:19:17.610925994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:19:23.768222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:19:23.771395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:23.929740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:23.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:23.939710 kernel: audit: type=1130 audit(1765559963.928:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:23.943879 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:19:23.988495 kubelet[2266]: E1212 17:19:23.988416 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:19:23.991480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:19:23.991648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:19:23.993704 systemd[1]: kubelet.service: Consumed 155ms CPU time, 107.2M memory peak. Dec 12 17:19:23.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 12 17:19:23.997547 kernel: audit: type=1131 audit(1765559963.990:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:19:24.200907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:24.201073 systemd[1]: kubelet.service: Consumed 155ms CPU time, 107.2M memory peak. Dec 12 17:19:24.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:24.203264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:24.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:24.207666 kernel: audit: type=1130 audit(1765559964.199:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:24.207763 kernel: audit: type=1131 audit(1765559964.199:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:24.231264 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)... Dec 12 17:19:24.231284 systemd[1]: Reloading... Dec 12 17:19:24.326534 zram_generator::config[2335]: No configuration found. Dec 12 17:19:24.738853 systemd[1]: Reloading finished in 507 ms. 
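[Annotation] The daemon reload above was requested by a systemctl run inside session-7.scope (client PID 2284), i.e. from the session in which /home/core/install.sh was invoked. The BPF prog-id LOAD/UNLOAD audit records on the following lines are consistent with systemd detaching and re-attaching its per-unit cgroup BPF programs (device and socket filters) during the reload. A hedged way to inspect those attachments, assuming bpftool is available:
    # Programs attached to unit cgroups (device/firewall filters)
    bpftool cgroup tree /sys/fs/cgroup
    bpftool prog show | head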
Dec 12 17:19:24.761000 audit: BPF prog-id=61 op=LOAD Dec 12 17:19:24.765608 kernel: audit: type=1334 audit(1765559964.761:279): prog-id=61 op=LOAD Dec 12 17:19:24.765706 kernel: audit: type=1334 audit(1765559964.761:280): prog-id=47 op=UNLOAD Dec 12 17:19:24.765726 kernel: audit: type=1334 audit(1765559964.762:281): prog-id=62 op=LOAD Dec 12 17:19:24.765754 kernel: audit: type=1334 audit(1765559964.763:282): prog-id=63 op=LOAD Dec 12 17:19:24.761000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:19:24.762000 audit: BPF prog-id=62 op=LOAD Dec 12 17:19:24.763000 audit: BPF prog-id=63 op=LOAD Dec 12 17:19:24.763000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:19:24.767038 kernel: audit: type=1334 audit(1765559964.763:283): prog-id=48 op=UNLOAD Dec 12 17:19:24.767081 kernel: audit: type=1334 audit(1765559964.763:284): prog-id=49 op=UNLOAD Dec 12 17:19:24.763000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:19:24.763000 audit: BPF prog-id=64 op=LOAD Dec 12 17:19:24.763000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:19:24.764000 audit: BPF prog-id=65 op=LOAD Dec 12 17:19:24.765000 audit: BPF prog-id=66 op=LOAD Dec 12 17:19:24.765000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:19:24.765000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:19:24.766000 audit: BPF prog-id=67 op=LOAD Dec 12 17:19:24.766000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:19:24.766000 audit: BPF prog-id=68 op=LOAD Dec 12 17:19:24.776000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:19:24.776000 audit: BPF prog-id=69 op=LOAD Dec 12 17:19:24.776000 audit: BPF prog-id=70 op=LOAD Dec 12 17:19:24.776000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:19:24.776000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:19:24.777000 audit: BPF prog-id=71 op=LOAD Dec 12 17:19:24.777000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:19:24.778000 audit: BPF prog-id=72 op=LOAD Dec 12 17:19:24.778000 audit: BPF prog-id=73 op=LOAD Dec 12 17:19:24.778000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:19:24.778000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:19:24.778000 audit: BPF prog-id=74 op=LOAD Dec 12 17:19:24.778000 audit: BPF prog-id=75 op=LOAD Dec 12 17:19:24.778000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:19:24.778000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:19:24.778000 audit: BPF prog-id=76 op=LOAD Dec 12 17:19:24.778000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:19:24.779000 audit: BPF prog-id=77 op=LOAD Dec 12 17:19:24.779000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:19:24.781000 audit: BPF prog-id=78 op=LOAD Dec 12 17:19:24.781000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:19:24.781000 audit: BPF prog-id=79 op=LOAD Dec 12 17:19:24.781000 audit: BPF prog-id=80 op=LOAD Dec 12 17:19:24.781000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:19:24.781000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:19:24.802172 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:19:24.802272 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:19:24.802597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:24.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:19:24.802660 systemd[1]: kubelet.service: Consumed 104ms CPU time, 94.9M memory peak. Dec 12 17:19:24.804500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:24.966262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:19:24.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:24.979845 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:19:25.015090 kubelet[2374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:19:25.015090 kubelet[2374]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:19:25.015090 kubelet[2374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:19:25.015090 kubelet[2374]: I1212 17:19:25.015042 2374 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:19:26.109466 kubelet[2374]: I1212 17:19:26.109408 2374 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:19:26.109466 kubelet[2374]: I1212 17:19:26.109447 2374 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:19:26.109844 kubelet[2374]: I1212 17:19:26.109817 2374 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:19:26.132521 kubelet[2374]: E1212 17:19:26.132470 2374 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:19:26.133224 kubelet[2374]: I1212 17:19:26.133209 2374 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:19:26.140014 kubelet[2374]: I1212 17:19:26.139974 2374 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:19:26.143378 kubelet[2374]: I1212 17:19:26.143349 2374 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:19:26.143761 kubelet[2374]: I1212 17:19:26.143726 2374 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:19:26.143938 kubelet[2374]: I1212 17:19:26.143763 2374 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:19:26.144030 kubelet[2374]: I1212 17:19:26.144018 2374 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:19:26.144030 kubelet[2374]: I1212 17:19:26.144028 2374 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:19:26.144286 kubelet[2374]: I1212 17:19:26.144265 2374 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:19:26.147190 kubelet[2374]: I1212 17:19:26.147163 2374 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:19:26.147243 kubelet[2374]: I1212 17:19:26.147207 2374 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:19:26.147243 kubelet[2374]: I1212 17:19:26.147240 2374 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:19:26.148283 kubelet[2374]: I1212 17:19:26.148246 2374 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:19:26.149408 kubelet[2374]: I1212 17:19:26.149388 2374 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:19:26.150144 kubelet[2374]: I1212 17:19:26.150109 2374 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:19:26.150289 kubelet[2374]: W1212 17:19:26.150254 2374 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 17:19:26.153392 kubelet[2374]: E1212 17:19:26.153353 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:19:26.153392 kubelet[2374]: I1212 17:19:26.153380 2374 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:19:26.153470 kubelet[2374]: I1212 17:19:26.153424 2374 server.go:1289] "Started kubelet" Dec 12 17:19:26.153569 kubelet[2374]: E1212 17:19:26.153539 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:19:26.153604 kubelet[2374]: I1212 17:19:26.153583 2374 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:19:26.158697 kubelet[2374]: I1212 17:19:26.158649 2374 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:19:26.159772 kubelet[2374]: I1212 17:19:26.159651 2374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:19:26.160082 kubelet[2374]: I1212 17:19:26.160057 2374 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:19:26.162611 kubelet[2374]: I1212 17:19:26.162586 2374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:19:26.163230 kubelet[2374]: E1212 17:19:26.162065 2374 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880876f54394dda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:19:26.153395674 +0000 UTC m=+1.170022841,LastTimestamp:2025-12-12 17:19:26.153395674 +0000 UTC m=+1.170022841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:19:26.164083 kubelet[2374]: I1212 17:19:26.163692 2374 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:19:26.164083 kubelet[2374]: E1212 17:19:26.163735 2374 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:19:26.164556 kubelet[2374]: I1212 17:19:26.164538 2374 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:19:26.164690 kubelet[2374]: E1212 17:19:26.164669 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:19:26.164794 kubelet[2374]: I1212 17:19:26.164775 2374 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:19:26.165006 kubelet[2374]: I1212 17:19:26.164991 2374 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:19:26.165935 kubelet[2374]: E1212 17:19:26.165772 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Dec 12 17:19:26.165935 kubelet[2374]: E1212 17:19:26.165891 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:19:26.166213 kubelet[2374]: I1212 17:19:26.166167 2374 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:19:26.166496 kubelet[2374]: I1212 17:19:26.166476 2374 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:19:26.167803 kubelet[2374]: I1212 17:19:26.167784 2374 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:19:26.169000 audit[2391]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.169000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff44610f0 a2=0 a3=0 items=0 ppid=2374 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:19:26.170000 audit[2392]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.170000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4171d90 a2=0 a3=0 items=0 ppid=2374 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:19:26.172000 audit[2394]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.172000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd34d1430 a2=0 a3=0 items=0 ppid=2374 pid=2394 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:19:26.174000 audit[2396]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.174000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe603ac90 a2=0 a3=0 items=0 ppid=2374 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:19:26.182309 kubelet[2374]: I1212 17:19:26.182258 2374 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:19:26.182309 kubelet[2374]: I1212 17:19:26.182291 2374 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:19:26.182309 kubelet[2374]: I1212 17:19:26.182313 2374 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:19:26.180000 audit[2403]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.180000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc081de50 a2=0 a3=0 items=0 ppid=2374 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:19:26.183490 kubelet[2374]: I1212 17:19:26.183429 2374 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:19:26.182000 audit[2406]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.182000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffdddf3b0 a2=0 a3=0 items=0 ppid=2374 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.183000 audit[2405]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:26.183000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdfe93240 a2=0 a3=0 items=0 ppid=2374 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:19:26.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:19:26.185053 kubelet[2374]: I1212 17:19:26.184904 2374 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:19:26.185053 kubelet[2374]: I1212 17:19:26.184929 2374 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:19:26.185053 kubelet[2374]: I1212 17:19:26.184949 2374 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:19:26.185053 kubelet[2374]: I1212 17:19:26.184957 2374 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:19:26.185053 kubelet[2374]: E1212 17:19:26.185010 2374 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:19:26.185592 kubelet[2374]: E1212 17:19:26.185555 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:19:26.184000 audit[2408]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:26.184000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf3b2a00 a2=0 a3=0 items=0 ppid=2374 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:19:26.185000 audit[2409]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.185000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc67aa0b0 a2=0 a3=0 items=0 ppid=2374 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:19:26.186000 audit[2410]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:26.186000 audit[2410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb0ffde0 a2=0 a3=0 items=0 ppid=2374 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:19:26.186000 audit[2411]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:26.186000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfebbf00 a2=0 a3=0 items=0 ppid=2374 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:19:26.187000 audit[2412]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2412 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:26.187000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6b63e20 a2=0 a3=0 items=0 ppid=2374 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:19:26.264912 kubelet[2374]: E1212 17:19:26.264858 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:19:26.279666 kubelet[2374]: I1212 17:19:26.279617 2374 policy_none.go:49] "None policy: Start" Dec 12 17:19:26.279666 kubelet[2374]: I1212 17:19:26.279655 2374 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:19:26.279666 kubelet[2374]: I1212 17:19:26.279675 2374 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:19:26.285196 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:19:26.285610 kubelet[2374]: E1212 17:19:26.285225 2374 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:19:26.302860 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:19:26.317353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:19:26.319354 kubelet[2374]: E1212 17:19:26.318842 2374 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:19:26.319354 kubelet[2374]: I1212 17:19:26.319055 2374 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:19:26.319354 kubelet[2374]: I1212 17:19:26.319068 2374 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:19:26.319354 kubelet[2374]: I1212 17:19:26.319281 2374 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:19:26.320725 kubelet[2374]: E1212 17:19:26.320693 2374 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:19:26.320784 kubelet[2374]: E1212 17:19:26.320741 2374 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:19:26.366582 kubelet[2374]: E1212 17:19:26.366437 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Dec 12 17:19:26.420792 kubelet[2374]: I1212 17:19:26.420733 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:19:26.421282 kubelet[2374]: E1212 17:19:26.421244 2374 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 12 17:19:26.496429 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 12 17:19:26.519057 kubelet[2374]: E1212 17:19:26.519001 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:26.522124 systemd[1]: Created slice kubepods-burstable-pod7eda471f4e21075ef66e14e0f10dfcb3.slice - libcontainer container kubepods-burstable-pod7eda471f4e21075ef66e14e0f10dfcb3.slice. Dec 12 17:19:26.524216 kubelet[2374]: E1212 17:19:26.524173 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:26.525849 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Dec 12 17:19:26.527599 kubelet[2374]: E1212 17:19:26.527572 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:26.566820 kubelet[2374]: I1212 17:19:26.566772 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:26.566820 kubelet[2374]: I1212 17:19:26.566814 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:26.566820 kubelet[2374]: I1212 17:19:26.566834 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:26.566970 kubelet[2374]: I1212 17:19:26.566857 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7eda471f4e21075ef66e14e0f10dfcb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eda471f4e21075ef66e14e0f10dfcb3\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:26.566970 kubelet[2374]: I1212 17:19:26.566872 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7eda471f4e21075ef66e14e0f10dfcb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eda471f4e21075ef66e14e0f10dfcb3\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:26.566970 kubelet[2374]: I1212 17:19:26.566886 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:26.566970 kubelet[2374]: I1212 17:19:26.566904 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:26.566970 kubelet[2374]: I1212 17:19:26.566919 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7eda471f4e21075ef66e14e0f10dfcb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7eda471f4e21075ef66e14e0f10dfcb3\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:26.567095 kubelet[2374]: I1212 17:19:26.566934 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:26.623427 kubelet[2374]: I1212 17:19:26.623007 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:19:26.623427 kubelet[2374]: E1212 17:19:26.623322 2374 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 12 17:19:26.766949 kubelet[2374]: E1212 17:19:26.766892 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Dec 12 17:19:26.820338 kubelet[2374]: E1212 17:19:26.820294 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:26.821022 containerd[1581]: time="2025-12-12T17:19:26.820985634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:19:26.825284 kubelet[2374]: E1212 17:19:26.825249 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:26.825736 containerd[1581]: time="2025-12-12T17:19:26.825697114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7eda471f4e21075ef66e14e0f10dfcb3,Namespace:kube-system,Attempt:0,}" Dec 12 17:19:26.829052 kubelet[2374]: E1212 17:19:26.829026 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:26.829871 containerd[1581]: time="2025-12-12T17:19:26.829813954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 17:19:26.850279 containerd[1581]: time="2025-12-12T17:19:26.850216154Z" level=info msg="connecting to shim 85e27c49faee414c9c1439869605e69fe21aff588cfc7c773eba95ed6956ee2e" address="unix:///run/containerd/s/663b68778230782f1641c87eb7578565d3b07e42884647fc04b7a17dda33b7f7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:26.868107 containerd[1581]: time="2025-12-12T17:19:26.868051434Z" level=info msg="connecting to shim 3043e28ab452be3b62e721906acf84592843e18b3807190129f7133765e34adc" address="unix:///run/containerd/s/fc33453cdd46868adf3a9c0c58641b459438c62ed42898a6f3065f7ecb73eaa0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:26.868489 containerd[1581]: time="2025-12-12T17:19:26.868411674Z" level=info msg="connecting to shim 3cc0f1726e3e4036d163ed7b97146240002b0ac78fee0f9d59c813483eda9202" address="unix:///run/containerd/s/b007773be112a821595cd5dff7e277388c64aaca43da0387da1b7963eb3a923d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:26.887833 systemd[1]: Started cri-containerd-85e27c49faee414c9c1439869605e69fe21aff588cfc7c773eba95ed6956ee2e.scope - libcontainer container 
85e27c49faee414c9c1439869605e69fe21aff588cfc7c773eba95ed6956ee2e. Dec 12 17:19:26.892786 systemd[1]: Started cri-containerd-3043e28ab452be3b62e721906acf84592843e18b3807190129f7133765e34adc.scope - libcontainer container 3043e28ab452be3b62e721906acf84592843e18b3807190129f7133765e34adc. Dec 12 17:19:26.902000 audit: BPF prog-id=81 op=LOAD Dec 12 17:19:26.903000 audit: BPF prog-id=82 op=LOAD Dec 12 17:19:26.903000 audit[2439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.903000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:19:26.903000 audit[2439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.903000 audit: BPF prog-id=83 op=LOAD Dec 12 17:19:26.903000 audit[2439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.904000 audit: BPF prog-id=84 op=LOAD Dec 12 17:19:26.904000 audit[2439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.904000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:19:26.904000 audit[2439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.904000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.904000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:19:26.904000 audit[2439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.904000 audit: BPF prog-id=85 op=LOAD Dec 12 17:19:26.904000 audit[2439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2422 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653237633439666165653431346339633134333938363936303565 Dec 12 17:19:26.915860 systemd[1]: Started cri-containerd-3cc0f1726e3e4036d163ed7b97146240002b0ac78fee0f9d59c813483eda9202.scope - libcontainer container 3cc0f1726e3e4036d163ed7b97146240002b0ac78fee0f9d59c813483eda9202. 
Dec 12 17:19:26.916000 audit: BPF prog-id=86 op=LOAD Dec 12 17:19:26.916000 audit: BPF prog-id=87 op=LOAD Dec 12 17:19:26.916000 audit[2482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.917000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:19:26.917000 audit[2482]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.917000 audit: BPF prog-id=88 op=LOAD Dec 12 17:19:26.917000 audit[2482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.918000 audit: BPF prog-id=89 op=LOAD Dec 12 17:19:26.918000 audit[2482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.918000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:19:26.918000 audit[2482]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.918000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:19:26.918000 audit[2482]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.918000 audit: BPF prog-id=90 op=LOAD Dec 12 17:19:26.918000 audit[2482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2457 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330343365323861623435326265336236326537323139303661636638 Dec 12 17:19:26.930000 audit: BPF prog-id=91 op=LOAD Dec 12 17:19:26.931000 audit: BPF prog-id=92 op=LOAD Dec 12 17:19:26.931000 audit[2486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2455 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.932000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:19:26.932000 audit[2486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.932000 audit: BPF prog-id=93 op=LOAD Dec 12 17:19:26.932000 audit[2486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2455 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.933000 audit: BPF prog-id=94 op=LOAD Dec 12 17:19:26.933000 audit[2486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2455 pid=2486 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.936549 containerd[1581]: time="2025-12-12T17:19:26.936483434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"85e27c49faee414c9c1439869605e69fe21aff588cfc7c773eba95ed6956ee2e\"" Dec 12 17:19:26.935000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:19:26.935000 audit[2486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.935000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:19:26.935000 audit[2486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.935000 audit: BPF prog-id=95 op=LOAD Dec 12 17:19:26.935000 audit[2486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2455 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:26.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633066313732366533653430333664313633656437623937313436 Dec 12 17:19:26.939419 kubelet[2374]: E1212 17:19:26.939379 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:26.951175 containerd[1581]: time="2025-12-12T17:19:26.951120954Z" level=info msg="CreateContainer within sandbox \"85e27c49faee414c9c1439869605e69fe21aff588cfc7c773eba95ed6956ee2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:19:26.962092 containerd[1581]: time="2025-12-12T17:19:26.962031634Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3043e28ab452be3b62e721906acf84592843e18b3807190129f7133765e34adc\"" Dec 12 17:19:26.963189 kubelet[2374]: E1212 17:19:26.963158 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:26.965448 containerd[1581]: time="2025-12-12T17:19:26.965379474Z" level=info msg="Container 473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:26.967940 containerd[1581]: time="2025-12-12T17:19:26.967892594Z" level=info msg="CreateContainer within sandbox \"3043e28ab452be3b62e721906acf84592843e18b3807190129f7133765e34adc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:19:26.969984 containerd[1581]: time="2025-12-12T17:19:26.969936274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7eda471f4e21075ef66e14e0f10dfcb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cc0f1726e3e4036d163ed7b97146240002b0ac78fee0f9d59c813483eda9202\"" Dec 12 17:19:26.971065 kubelet[2374]: E1212 17:19:26.971037 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:26.976023 containerd[1581]: time="2025-12-12T17:19:26.975931554Z" level=info msg="CreateContainer within sandbox \"3cc0f1726e3e4036d163ed7b97146240002b0ac78fee0f9d59c813483eda9202\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:19:26.976317 containerd[1581]: time="2025-12-12T17:19:26.976285394Z" level=info msg="CreateContainer within sandbox \"85e27c49faee414c9c1439869605e69fe21aff588cfc7c773eba95ed6956ee2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31\"" Dec 12 17:19:26.977040 containerd[1581]: time="2025-12-12T17:19:26.977016634Z" level=info msg="StartContainer for \"473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31\"" Dec 12 17:19:26.978166 containerd[1581]: time="2025-12-12T17:19:26.978121474Z" level=info msg="connecting to shim 473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31" address="unix:///run/containerd/s/663b68778230782f1641c87eb7578565d3b07e42884647fc04b7a17dda33b7f7" protocol=ttrpc version=3 Dec 12 17:19:26.980555 containerd[1581]: time="2025-12-12T17:19:26.980420954Z" level=info msg="Container 6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:26.991227 containerd[1581]: time="2025-12-12T17:19:26.991162874Z" level=info msg="CreateContainer within sandbox \"3043e28ab452be3b62e721906acf84592843e18b3807190129f7133765e34adc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142\"" Dec 12 17:19:26.992577 containerd[1581]: time="2025-12-12T17:19:26.991793834Z" level=info msg="StartContainer for \"6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142\"" Dec 12 17:19:26.993411 containerd[1581]: time="2025-12-12T17:19:26.993241354Z" level=info msg="connecting to shim 6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142" 
address="unix:///run/containerd/s/fc33453cdd46868adf3a9c0c58641b459438c62ed42898a6f3065f7ecb73eaa0" protocol=ttrpc version=3 Dec 12 17:19:26.994016 containerd[1581]: time="2025-12-12T17:19:26.993966754Z" level=info msg="Container 23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:26.995787 systemd[1]: Started cri-containerd-473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31.scope - libcontainer container 473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31. Dec 12 17:19:27.003564 containerd[1581]: time="2025-12-12T17:19:27.002440874Z" level=info msg="CreateContainer within sandbox \"3cc0f1726e3e4036d163ed7b97146240002b0ac78fee0f9d59c813483eda9202\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999\"" Dec 12 17:19:27.008537 containerd[1581]: time="2025-12-12T17:19:27.005696554Z" level=info msg="StartContainer for \"23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999\"" Dec 12 17:19:27.008537 containerd[1581]: time="2025-12-12T17:19:27.006886114Z" level=info msg="connecting to shim 23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999" address="unix:///run/containerd/s/b007773be112a821595cd5dff7e277388c64aaca43da0387da1b7963eb3a923d" protocol=ttrpc version=3 Dec 12 17:19:27.016893 systemd[1]: Started cri-containerd-6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142.scope - libcontainer container 6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142. Dec 12 17:19:27.017000 audit: BPF prog-id=96 op=LOAD Dec 12 17:19:27.019000 audit: BPF prog-id=97 op=LOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.019000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.019000 audit: BPF prog-id=98 op=LOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.019000 audit: BPF prog-id=99 op=LOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.019000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.019000 audit: BPF prog-id=98 op=UNLOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.019000 audit: BPF prog-id=100 op=LOAD Dec 12 17:19:27.019000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336361616365363436643631356661326566363166613366346238 Dec 12 17:19:27.025933 kubelet[2374]: I1212 17:19:27.025698 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:19:27.026669 kubelet[2374]: E1212 17:19:27.026309 2374 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 12 17:19:27.033767 systemd[1]: Started cri-containerd-23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999.scope - libcontainer container 
23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999. Dec 12 17:19:27.035000 audit: BPF prog-id=101 op=LOAD Dec 12 17:19:27.036000 audit: BPF prog-id=102 op=LOAD Dec 12 17:19:27.036000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.036000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:19:27.036000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.039000 audit: BPF prog-id=103 op=LOAD Dec 12 17:19:27.039000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.039000 audit: BPF prog-id=104 op=LOAD Dec 12 17:19:27.039000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.039000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:19:27.039000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.039000 audit: 
BPF prog-id=103 op=UNLOAD Dec 12 17:19:27.039000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.039000 audit: BPF prog-id=105 op=LOAD Dec 12 17:19:27.039000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2457 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313064643364393639383738333664303263343034633564323331 Dec 12 17:19:27.047000 audit: BPF prog-id=106 op=LOAD Dec 12 17:19:27.048000 audit: BPF prog-id=107 op=LOAD Dec 12 17:19:27.048000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.049000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:19:27.049000 audit[2585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.049000 audit: BPF prog-id=108 op=LOAD Dec 12 17:19:27.049000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.050000 audit: BPF prog-id=109 op=LOAD Dec 12 17:19:27.050000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.050000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:19:27.050000 audit[2585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.050000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:19:27.050000 audit[2585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.050000 audit: BPF prog-id=110 op=LOAD Dec 12 17:19:27.050000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:27.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233656435393338333031653666326263393832663763346566663834 Dec 12 17:19:27.062885 containerd[1581]: time="2025-12-12T17:19:27.062835514Z" level=info msg="StartContainer for \"473caace646d615fa2ef61fa3f4b8c67e7eac4b330605f719801acf7ea858e31\" returns successfully" Dec 12 17:19:27.078960 kubelet[2374]: E1212 17:19:27.078814 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:19:27.081442 containerd[1581]: time="2025-12-12T17:19:27.081338834Z" level=info msg="StartContainer for \"6410dd3d96987836d02c404c5d2316f772d6bbba3d25cd217d0fbb23e7231142\" returns successfully" Dec 12 17:19:27.095049 containerd[1581]: time="2025-12-12T17:19:27.095000754Z" level=info msg="StartContainer for 
\"23ed5938301e6f2bc982f7c4eff842d9a16f97cf5de0d1b7bf7108ff4c789999\" returns successfully" Dec 12 17:19:27.192651 kubelet[2374]: E1212 17:19:27.192357 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:27.192651 kubelet[2374]: E1212 17:19:27.192484 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:27.196284 kubelet[2374]: E1212 17:19:27.196250 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:27.196456 kubelet[2374]: E1212 17:19:27.196394 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:27.198534 kubelet[2374]: E1212 17:19:27.198272 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:27.198534 kubelet[2374]: E1212 17:19:27.198377 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:27.828486 kubelet[2374]: I1212 17:19:27.828441 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:19:28.200134 kubelet[2374]: E1212 17:19:28.200039 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:28.200502 kubelet[2374]: E1212 17:19:28.200173 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:28.201061 kubelet[2374]: E1212 17:19:28.200890 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:19:28.201061 kubelet[2374]: E1212 17:19:28.201008 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:28.866232 kubelet[2374]: E1212 17:19:28.866181 2374 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:19:28.927441 kubelet[2374]: I1212 17:19:28.927382 2374 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:19:28.927441 kubelet[2374]: E1212 17:19:28.927429 2374 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 12 17:19:28.937544 kubelet[2374]: E1212 17:19:28.937454 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:19:29.038491 kubelet[2374]: E1212 17:19:29.038269 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:19:29.153636 kubelet[2374]: I1212 17:19:29.153522 2374 apiserver.go:52] "Watching apiserver" Dec 12 17:19:29.165701 kubelet[2374]: I1212 17:19:29.165613 2374 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:19:29.165701 kubelet[2374]: I1212 17:19:29.165686 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:29.171115 kubelet[2374]: E1212 17:19:29.171072 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:29.171115 kubelet[2374]: I1212 17:19:29.171111 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:29.173349 kubelet[2374]: E1212 17:19:29.173060 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:29.173349 kubelet[2374]: I1212 17:19:29.173094 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:29.175041 kubelet[2374]: E1212 17:19:29.175011 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:29.379433 kubelet[2374]: I1212 17:19:29.379383 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:29.382028 kubelet[2374]: E1212 17:19:29.382001 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:29.382312 kubelet[2374]: E1212 17:19:29.382276 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:30.913630 systemd[1]: Reload requested from client PID 2666 ('systemctl') (unit session-7.scope)... Dec 12 17:19:30.913649 systemd[1]: Reloading... Dec 12 17:19:31.001552 zram_generator::config[2715]: No configuration found. Dec 12 17:19:31.182560 systemd[1]: Reloading finished in 268 ms. Dec 12 17:19:31.215501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:31.233571 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:19:31.233886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:31.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:31.233982 systemd[1]: kubelet.service: Consumed 1.572s CPU time, 128.3M memory peak. Dec 12 17:19:31.234540 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 12 17:19:31.234602 kernel: audit: type=1131 audit(1765559971.232:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:19:31.238231 kernel: audit: type=1334 audit(1765559971.235:382): prog-id=111 op=LOAD Dec 12 17:19:31.238320 kernel: audit: type=1334 audit(1765559971.235:383): prog-id=61 op=UNLOAD Dec 12 17:19:31.238349 kernel: audit: type=1334 audit(1765559971.235:384): prog-id=112 op=LOAD Dec 12 17:19:31.235000 audit: BPF prog-id=111 op=LOAD Dec 12 17:19:31.235000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:19:31.235000 audit: BPF prog-id=112 op=LOAD Dec 12 17:19:31.236000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:19:31.235000 audit: BPF prog-id=113 op=LOAD Dec 12 17:19:31.239631 kernel: audit: type=1334 audit(1765559971.235:385): prog-id=113 op=LOAD Dec 12 17:19:31.239695 kernel: audit: type=1334 audit(1765559971.235:386): prog-id=62 op=UNLOAD Dec 12 17:19:31.235000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:19:31.235000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:19:31.241202 kernel: audit: type=1334 audit(1765559971.235:387): prog-id=63 op=UNLOAD Dec 12 17:19:31.241257 kernel: audit: type=1334 audit(1765559971.238:388): prog-id=114 op=LOAD Dec 12 17:19:31.241281 kernel: audit: type=1334 audit(1765559971.238:389): prog-id=71 op=UNLOAD Dec 12 17:19:31.241300 kernel: audit: type=1334 audit(1765559971.238:390): prog-id=115 op=LOAD Dec 12 17:19:31.238000 audit: BPF prog-id=114 op=LOAD Dec 12 17:19:31.238000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:19:31.238000 audit: BPF prog-id=115 op=LOAD Dec 12 17:19:31.239000 audit: BPF prog-id=116 op=LOAD Dec 12 17:19:31.239000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:19:31.239000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:19:31.241000 audit: BPF prog-id=117 op=LOAD Dec 12 17:19:31.241000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:19:31.241000 audit: BPF prog-id=118 op=LOAD Dec 12 17:19:31.241000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:19:31.242000 audit: BPF prog-id=119 op=LOAD Dec 12 17:19:31.242000 audit: BPF prog-id=120 op=LOAD Dec 12 17:19:31.242000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:19:31.242000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:19:31.242000 audit: BPF prog-id=121 op=LOAD Dec 12 17:19:31.242000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:19:31.243000 audit: BPF prog-id=122 op=LOAD Dec 12 17:19:31.243000 audit: BPF prog-id=123 op=LOAD Dec 12 17:19:31.243000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:19:31.243000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:19:31.243000 audit: BPF prog-id=124 op=LOAD Dec 12 17:19:31.249000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:19:31.250000 audit: BPF prog-id=125 op=LOAD Dec 12 17:19:31.250000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:19:31.250000 audit: BPF prog-id=126 op=LOAD Dec 12 17:19:31.250000 audit: BPF prog-id=127 op=LOAD Dec 12 17:19:31.250000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:19:31.250000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:19:31.251000 audit: BPF prog-id=128 op=LOAD Dec 12 17:19:31.251000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:19:31.251000 audit: BPF prog-id=129 op=LOAD Dec 12 17:19:31.251000 audit: BPF prog-id=130 op=LOAD Dec 12 17:19:31.251000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:19:31.251000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:19:31.408989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:19:31.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:19:31.413295 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:19:31.451431 kubelet[2754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:19:31.451431 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:19:31.451431 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:19:31.451431 kubelet[2754]: I1212 17:19:31.451398 2754 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:19:31.458534 kubelet[2754]: I1212 17:19:31.458453 2754 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:19:31.458534 kubelet[2754]: I1212 17:19:31.458489 2754 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:19:31.459776 kubelet[2754]: I1212 17:19:31.459730 2754 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:19:31.461314 kubelet[2754]: I1212 17:19:31.461293 2754 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:19:31.463712 kubelet[2754]: I1212 17:19:31.463681 2754 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:19:31.467689 kubelet[2754]: I1212 17:19:31.467648 2754 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:19:31.470526 kubelet[2754]: I1212 17:19:31.470442 2754 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:19:31.470693 kubelet[2754]: I1212 17:19:31.470665 2754 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:19:31.470838 kubelet[2754]: I1212 17:19:31.470695 2754 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:19:31.470913 kubelet[2754]: I1212 17:19:31.470849 2754 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:19:31.470913 kubelet[2754]: I1212 17:19:31.470859 2754 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:19:31.470913 kubelet[2754]: I1212 17:19:31.470901 2754 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:19:31.471073 kubelet[2754]: I1212 17:19:31.471058 2754 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:19:31.471102 kubelet[2754]: I1212 17:19:31.471076 2754 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:19:31.472011 kubelet[2754]: I1212 17:19:31.471995 2754 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:19:31.472041 kubelet[2754]: I1212 17:19:31.472021 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:19:31.473816 kubelet[2754]: I1212 17:19:31.473786 2754 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:19:31.474366 kubelet[2754]: I1212 17:19:31.474345 2754 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:19:31.479409 kubelet[2754]: I1212 17:19:31.478126 2754 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:19:31.479525 kubelet[2754]: I1212 17:19:31.479446 2754 server.go:1289] "Started kubelet" Dec 12 17:19:31.483308 kubelet[2754]: I1212 17:19:31.482869 2754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:19:31.483308 
kubelet[2754]: I1212 17:19:31.483135 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:19:31.485095 kubelet[2754]: I1212 17:19:31.485069 2754 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:19:31.485586 kubelet[2754]: I1212 17:19:31.485538 2754 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:19:31.486142 kubelet[2754]: I1212 17:19:31.486114 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:19:31.487393 kubelet[2754]: I1212 17:19:31.487374 2754 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:19:31.490135 kubelet[2754]: I1212 17:19:31.490111 2754 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:19:31.490386 kubelet[2754]: E1212 17:19:31.490363 2754 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:19:31.490621 kubelet[2754]: I1212 17:19:31.490602 2754 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:19:31.493101 kubelet[2754]: I1212 17:19:31.492634 2754 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:19:31.495779 kubelet[2754]: I1212 17:19:31.495746 2754 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:19:31.499929 kubelet[2754]: E1212 17:19:31.499882 2754 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:19:31.500087 kubelet[2754]: I1212 17:19:31.500069 2754 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:19:31.500148 kubelet[2754]: I1212 17:19:31.500139 2754 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:19:31.508527 kubelet[2754]: I1212 17:19:31.507818 2754 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:19:31.509572 kubelet[2754]: I1212 17:19:31.509546 2754 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:19:31.509572 kubelet[2754]: I1212 17:19:31.509573 2754 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:19:31.509654 kubelet[2754]: I1212 17:19:31.509594 2754 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:19:31.509654 kubelet[2754]: I1212 17:19:31.509604 2754 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:19:31.509694 kubelet[2754]: E1212 17:19:31.509649 2754 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:19:31.542586 kubelet[2754]: I1212 17:19:31.542546 2754 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:19:31.542801 kubelet[2754]: I1212 17:19:31.542783 2754 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:19:31.542864 kubelet[2754]: I1212 17:19:31.542856 2754 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:19:31.543060 kubelet[2754]: I1212 17:19:31.543036 2754 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:19:31.543135 kubelet[2754]: I1212 17:19:31.543111 2754 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:19:31.543195 kubelet[2754]: I1212 17:19:31.543185 2754 policy_none.go:49] "None policy: Start" Dec 12 17:19:31.543263 kubelet[2754]: I1212 17:19:31.543253 2754 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:19:31.543320 kubelet[2754]: I1212 17:19:31.543312 2754 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:19:31.543489 kubelet[2754]: I1212 17:19:31.543473 2754 state_mem.go:75] "Updated machine memory state" Dec 12 17:19:31.548903 kubelet[2754]: E1212 17:19:31.548863 2754 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:19:31.549078 kubelet[2754]: I1212 17:19:31.549030 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:19:31.549078 kubelet[2754]: I1212 17:19:31.549049 2754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:19:31.549728 kubelet[2754]: I1212 17:19:31.549695 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:19:31.550743 kubelet[2754]: E1212 17:19:31.550605 2754 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:19:31.610822 kubelet[2754]: I1212 17:19:31.610751 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:31.610822 kubelet[2754]: I1212 17:19:31.610795 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:31.611050 kubelet[2754]: I1212 17:19:31.610907 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:31.653533 kubelet[2754]: I1212 17:19:31.653433 2754 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:19:31.661982 kubelet[2754]: I1212 17:19:31.661948 2754 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:19:31.662092 kubelet[2754]: I1212 17:19:31.662041 2754 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:19:31.791989 kubelet[2754]: I1212 17:19:31.791939 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7eda471f4e21075ef66e14e0f10dfcb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eda471f4e21075ef66e14e0f10dfcb3\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:31.791989 kubelet[2754]: I1212 17:19:31.791989 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7eda471f4e21075ef66e14e0f10dfcb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7eda471f4e21075ef66e14e0f10dfcb3\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:31.792155 kubelet[2754]: I1212 17:19:31.792028 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:31.792155 kubelet[2754]: I1212 17:19:31.792060 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:31.792155 kubelet[2754]: I1212 17:19:31.792079 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:31.792155 kubelet[2754]: I1212 17:19:31.792094 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:31.792155 kubelet[2754]: I1212 17:19:31.792150 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7eda471f4e21075ef66e14e0f10dfcb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eda471f4e21075ef66e14e0f10dfcb3\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:19:31.792278 kubelet[2754]: I1212 17:19:31.792166 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:31.792278 kubelet[2754]: I1212 17:19:31.792206 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:19:31.916497 kubelet[2754]: E1212 17:19:31.916456 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:31.916497 kubelet[2754]: E1212 17:19:31.916460 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:31.917657 kubelet[2754]: E1212 17:19:31.917632 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:32.473761 kubelet[2754]: I1212 17:19:32.473693 2754 apiserver.go:52] "Watching apiserver" Dec 12 17:19:32.493583 kubelet[2754]: I1212 17:19:32.493539 2754 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:19:32.528220 kubelet[2754]: E1212 17:19:32.528066 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:32.528522 kubelet[2754]: I1212 17:19:32.528474 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:32.529010 kubelet[2754]: E1212 17:19:32.528954 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:32.536446 kubelet[2754]: E1212 17:19:32.535839 2754 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:19:32.536446 kubelet[2754]: E1212 17:19:32.536051 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:32.551640 kubelet[2754]: I1212 17:19:32.551483 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.551468554 podStartE2EDuration="1.551468554s" podCreationTimestamp="2025-12-12 17:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:19:32.551181474 +0000 UTC m=+1.134313521" 
watchObservedRunningTime="2025-12-12 17:19:32.551468554 +0000 UTC m=+1.134600561" Dec 12 17:19:32.571670 kubelet[2754]: I1212 17:19:32.571609 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5715940339999999 podStartE2EDuration="1.571594034s" podCreationTimestamp="2025-12-12 17:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:19:32.561403914 +0000 UTC m=+1.144535961" watchObservedRunningTime="2025-12-12 17:19:32.571594034 +0000 UTC m=+1.154726041" Dec 12 17:19:32.582927 kubelet[2754]: I1212 17:19:32.582654 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.582638034 podStartE2EDuration="1.582638034s" podCreationTimestamp="2025-12-12 17:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:19:32.573692394 +0000 UTC m=+1.156824481" watchObservedRunningTime="2025-12-12 17:19:32.582638034 +0000 UTC m=+1.165770081" Dec 12 17:19:33.529816 kubelet[2754]: E1212 17:19:33.529637 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:33.529816 kubelet[2754]: E1212 17:19:33.529736 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:34.299792 kubelet[2754]: E1212 17:19:34.299737 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:34.531317 kubelet[2754]: E1212 17:19:34.531275 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:36.170530 kubelet[2754]: I1212 17:19:36.170444 2754 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:19:36.171345 containerd[1581]: time="2025-12-12T17:19:36.171294789Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:19:36.171927 kubelet[2754]: I1212 17:19:36.171906 2754 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:19:37.233863 systemd[1]: Created slice kubepods-besteffort-pod47a4eb44_aeb3_4d29_b604_6c4a4abbfa9a.slice - libcontainer container kubepods-besteffort-pod47a4eb44_aeb3_4d29_b604_6c4a4abbfa9a.slice. Dec 12 17:19:37.294728 systemd[1]: Created slice kubepods-besteffort-pod5d4b3740_b4df_4b98_94a9_8e6c2fadee1f.slice - libcontainer container kubepods-besteffort-pod5d4b3740_b4df_4b98_94a9_8e6c2fadee1f.slice. 
Dec 12 17:19:37.421900 kubelet[2754]: I1212 17:19:37.421799 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vw7k\" (UniqueName: \"kubernetes.io/projected/47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a-kube-api-access-6vw7k\") pod \"kube-proxy-dq4kt\" (UID: \"47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a\") " pod="kube-system/kube-proxy-dq4kt" Dec 12 17:19:37.421900 kubelet[2754]: I1212 17:19:37.421850 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a-xtables-lock\") pod \"kube-proxy-dq4kt\" (UID: \"47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a\") " pod="kube-system/kube-proxy-dq4kt" Dec 12 17:19:37.421900 kubelet[2754]: I1212 17:19:37.421874 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a-lib-modules\") pod \"kube-proxy-dq4kt\" (UID: \"47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a\") " pod="kube-system/kube-proxy-dq4kt" Dec 12 17:19:37.422457 kubelet[2754]: I1212 17:19:37.422344 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8vv\" (UniqueName: \"kubernetes.io/projected/5d4b3740-b4df-4b98-94a9-8e6c2fadee1f-kube-api-access-dg8vv\") pod \"tigera-operator-7dcd859c48-cttnw\" (UID: \"5d4b3740-b4df-4b98-94a9-8e6c2fadee1f\") " pod="tigera-operator/tigera-operator-7dcd859c48-cttnw" Dec 12 17:19:37.422457 kubelet[2754]: I1212 17:19:37.422395 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a-kube-proxy\") pod \"kube-proxy-dq4kt\" (UID: \"47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a\") " pod="kube-system/kube-proxy-dq4kt" Dec 12 17:19:37.422457 kubelet[2754]: I1212 17:19:37.422413 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d4b3740-b4df-4b98-94a9-8e6c2fadee1f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cttnw\" (UID: \"5d4b3740-b4df-4b98-94a9-8e6c2fadee1f\") " pod="tigera-operator/tigera-operator-7dcd859c48-cttnw" Dec 12 17:19:37.546217 kubelet[2754]: E1212 17:19:37.546001 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:37.547399 containerd[1581]: time="2025-12-12T17:19:37.546982587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dq4kt,Uid:47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a,Namespace:kube-system,Attempt:0,}" Dec 12 17:19:37.584990 containerd[1581]: time="2025-12-12T17:19:37.584943175Z" level=info msg="connecting to shim ed77048f8e8c05e8f90c2b82f571db273b8dadd374baa763383c1e7e667abbbb" address="unix:///run/containerd/s/9d2e885905e6f88ee36ae09aab421e43c03a9458ea64cc2a5051fae54daf1df3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:37.599359 containerd[1581]: time="2025-12-12T17:19:37.599250661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cttnw,Uid:5d4b3740-b4df-4b98-94a9-8e6c2fadee1f,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:19:37.627791 systemd[1]: Started cri-containerd-ed77048f8e8c05e8f90c2b82f571db273b8dadd374baa763383c1e7e667abbbb.scope - 
libcontainer container ed77048f8e8c05e8f90c2b82f571db273b8dadd374baa763383c1e7e667abbbb. Dec 12 17:19:37.633143 containerd[1581]: time="2025-12-12T17:19:37.633081144Z" level=info msg="connecting to shim 88fccd0169556a891d392a54e414dca38288247ea1346d8ed700733ec1124974" address="unix:///run/containerd/s/810b45e91fa779fc0eba66ace43a4ce4f1d4204abf9f742caa3fae71f41d0f40" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:37.641388 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:19:37.641475 kernel: audit: type=1334 audit(1765559977.638:423): prog-id=131 op=LOAD Dec 12 17:19:37.638000 audit: BPF prog-id=131 op=LOAD Dec 12 17:19:37.643000 audit: BPF prog-id=132 op=LOAD Dec 12 17:19:37.643000 audit[2829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.648929 kernel: audit: type=1334 audit(1765559977.643:424): prog-id=132 op=LOAD Dec 12 17:19:37.648984 kernel: audit: type=1300 audit(1765559977.643:424): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.656943 kernel: audit: type=1327 audit(1765559977.643:424): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.643000 audit: BPF prog-id=132 op=UNLOAD Dec 12 17:19:37.657932 kernel: audit: type=1334 audit(1765559977.643:425): prog-id=132 op=UNLOAD Dec 12 17:19:37.643000 audit[2829]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.661036 kernel: audit: type=1300 audit(1765559977.643:425): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.661251 kernel: audit: type=1327 audit(1765559977.643:425): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.643000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.643000 audit: BPF prog-id=133 op=LOAD Dec 12 17:19:37.667776 kernel: audit: type=1334 audit(1765559977.643:426): prog-id=133 op=LOAD Dec 12 17:19:37.667836 kernel: audit: type=1300 audit(1765559977.643:426): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.643000 audit[2829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.674348 kernel: audit: type=1327 audit(1765559977.643:426): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.644000 audit: BPF prog-id=134 op=LOAD Dec 12 17:19:37.644000 audit[2829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.647000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:19:37.647000 audit[2829]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.647000 audit: BPF prog-id=133 op=UNLOAD Dec 12 17:19:37.647000 audit[2829]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.647000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.647000 audit: BPF prog-id=135 op=LOAD Dec 12 17:19:37.647000 audit[2829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2817 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373730343866386538633035653866393063326238326635373164 Dec 12 17:19:37.677730 systemd[1]: Started cri-containerd-88fccd0169556a891d392a54e414dca38288247ea1346d8ed700733ec1124974.scope - libcontainer container 88fccd0169556a891d392a54e414dca38288247ea1346d8ed700733ec1124974. Dec 12 17:19:37.680577 containerd[1581]: time="2025-12-12T17:19:37.680540910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dq4kt,Uid:47a4eb44-aeb3-4d29-b604-6c4a4abbfa9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed77048f8e8c05e8f90c2b82f571db273b8dadd374baa763383c1e7e667abbbb\"" Dec 12 17:19:37.681373 kubelet[2754]: E1212 17:19:37.681345 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:37.687058 containerd[1581]: time="2025-12-12T17:19:37.687018949Z" level=info msg="CreateContainer within sandbox \"ed77048f8e8c05e8f90c2b82f571db273b8dadd374baa763383c1e7e667abbbb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:19:37.688000 audit: BPF prog-id=136 op=LOAD Dec 12 17:19:37.688000 audit: BPF prog-id=137 op=LOAD Dec 12 17:19:37.688000 audit[2868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.689000 audit: BPF prog-id=137 op=UNLOAD Dec 12 17:19:37.689000 audit[2868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.689000 audit: BPF prog-id=138 op=LOAD Dec 12 17:19:37.689000 audit[2868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001763e8 a2=98 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.689000 audit: BPF prog-id=139 op=LOAD Dec 12 17:19:37.689000 audit[2868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.689000 audit: BPF prog-id=139 op=UNLOAD Dec 12 17:19:37.689000 audit[2868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.689000 audit: BPF prog-id=138 op=UNLOAD Dec 12 17:19:37.689000 audit[2868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.689000 audit: BPF prog-id=140 op=LOAD Dec 12 17:19:37.689000 audit[2868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2852 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666363643031363935353661383931643339326135346534313464 Dec 12 17:19:37.702118 containerd[1581]: time="2025-12-12T17:19:37.702071079Z" level=info msg="Container 15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:37.713629 containerd[1581]: time="2025-12-12T17:19:37.713583588Z" 
level=info msg="CreateContainer within sandbox \"ed77048f8e8c05e8f90c2b82f571db273b8dadd374baa763383c1e7e667abbbb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3\"" Dec 12 17:19:37.715923 containerd[1581]: time="2025-12-12T17:19:37.715865962Z" level=info msg="StartContainer for \"15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3\"" Dec 12 17:19:37.717554 containerd[1581]: time="2025-12-12T17:19:37.717479212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cttnw,Uid:5d4b3740-b4df-4b98-94a9-8e6c2fadee1f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"88fccd0169556a891d392a54e414dca38288247ea1346d8ed700733ec1124974\"" Dec 12 17:19:37.719280 containerd[1581]: time="2025-12-12T17:19:37.719174462Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:19:37.720613 containerd[1581]: time="2025-12-12T17:19:37.720362029Z" level=info msg="connecting to shim 15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3" address="unix:///run/containerd/s/9d2e885905e6f88ee36ae09aab421e43c03a9458ea64cc2a5051fae54daf1df3" protocol=ttrpc version=3 Dec 12 17:19:37.750762 systemd[1]: Started cri-containerd-15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3.scope - libcontainer container 15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3. Dec 12 17:19:37.810000 audit: BPF prog-id=141 op=LOAD Dec 12 17:19:37.810000 audit[2900]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001fe3e8 a2=98 a3=0 items=0 ppid=2817 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613164323234373836393061343739646232616630323966353265 Dec 12 17:19:37.811000 audit: BPF prog-id=142 op=LOAD Dec 12 17:19:37.811000 audit[2900]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001fe168 a2=98 a3=0 items=0 ppid=2817 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613164323234373836393061343739646232616630323966353265 Dec 12 17:19:37.811000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:19:37.811000 audit[2900]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613164323234373836393061343739646232616630323966353265 Dec 12 17:19:37.811000 audit: BPF prog-id=141 op=UNLOAD Dec 12 
17:19:37.811000 audit[2900]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613164323234373836393061343739646232616630323966353265 Dec 12 17:19:37.811000 audit: BPF prog-id=143 op=LOAD Dec 12 17:19:37.811000 audit[2900]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001fe648 a2=98 a3=0 items=0 ppid=2817 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613164323234373836393061343739646232616630323966353265 Dec 12 17:19:37.839185 containerd[1581]: time="2025-12-12T17:19:37.839117102Z" level=info msg="StartContainer for \"15a1d22478690a479db2af029f52e16c372ab7b962ef7fbbe54171085424ebd3\" returns successfully" Dec 12 17:19:37.998000 audit[2967]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:37.998000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc65a23e0 a2=0 a3=1 items=0 ppid=2913 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.998000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:19:37.998000 audit[2966]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:37.998000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddf8e360 a2=0 a3=1 items=0 ppid=2913 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:37.998000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:19:38.000000 audit[2969]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.000000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcddf9130 a2=0 a3=1 items=0 ppid=2913 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:19:38.004000 audit[2971]: 
NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.004000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8642c20 a2=0 a3=1 items=0 ppid=2913 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:19:38.004000 audit[2972]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.004000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc642670 a2=0 a3=1 items=0 ppid=2913 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:19:38.004000 audit[2974]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.004000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1566010 a2=0 a3=1 items=0 ppid=2913 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:19:38.100000 audit[2975]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.100000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc2c241f0 a2=0 a3=1 items=0 ppid=2913 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:19:38.104000 audit[2977]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.104000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe1081ef0 a2=0 a3=1 items=0 ppid=2913 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:19:38.108000 audit[2980]: NETFILTER_CFG table=filter:62 family=2 entries=1 
op=nft_register_rule pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.108000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe43e8380 a2=0 a3=1 items=0 ppid=2913 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:19:38.109000 audit[2981]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.109000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd15a14e0 a2=0 a3=1 items=0 ppid=2913 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:19:38.111000 audit[2983]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.111000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8875950 a2=0 a3=1 items=0 ppid=2913 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:19:38.113000 audit[2984]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.113000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1035620 a2=0 a3=1 items=0 ppid=2913 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:19:38.115000 audit[2986]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.115000 audit[2986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff15b96c0 a2=0 a3=1 items=0 ppid=2913 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.115000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:19:38.119000 audit[2989]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.119000 audit[2989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc824da70 a2=0 a3=1 items=0 ppid=2913 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 17:19:38.120000 audit[2990]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.120000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc06b2bc0 a2=0 a3=1 items=0 ppid=2913 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:19:38.122000 audit[2992]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.122000 audit[2992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffee8152b0 a2=0 a3=1 items=0 ppid=2913 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:19:38.123000 audit[2993]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.123000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7d6f040 a2=0 a3=1 items=0 ppid=2913 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:19:38.126000 audit[2995]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.126000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd7ef04e0 a2=0 a3=1 items=0 ppid=2913 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.126000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:19:38.130000 audit[2999]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.130000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcb126a70 a2=0 a3=1 items=0 ppid=2913 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:19:38.133000 audit[3002]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.133000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2c37e90 a2=0 a3=1 items=0 ppid=2913 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:19:38.135000 audit[3003]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.135000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffff7d8330 a2=0 a3=1 items=0 ppid=2913 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.135000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:19:38.138000 audit[3005]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.138000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe65be6d0 a2=0 a3=1 items=0 ppid=2913 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:19:38.141000 audit[3008]: 
NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.141000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4b69a30 a2=0 a3=1 items=0 ppid=2913 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:19:38.143000 audit[3009]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.143000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda8233c0 a2=0 a3=1 items=0 ppid=2913 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:19:38.145000 audit[3011]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:19:38.145000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcd4b0dc0 a2=0 a3=1 items=0 ppid=2913 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:19:38.165000 audit[3017]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:38.165000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc4d48640 a2=0 a3=1 items=0 ppid=2913 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:38.177000 audit[3017]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:38.177000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc4d48640 a2=0 a3=1 items=0 ppid=2913 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 
17:19:38.179000 audit[3022]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.179000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc6321fb0 a2=0 a3=1 items=0 ppid=2913 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:19:38.181000 audit[3024]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.181000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff454ce70 a2=0 a3=1 items=0 ppid=2913 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:19:38.184000 audit[3027]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.184000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe395b540 a2=0 a3=1 items=0 ppid=2913 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:19:38.186000 audit[3028]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.186000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc28cbb60 a2=0 a3=1 items=0 ppid=2913 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:19:38.188000 audit[3030]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.188000 audit[3030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffdcc2fb0 a2=0 a3=1 items=0 ppid=2913 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.188000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:19:38.189000 audit[3031]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.189000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff53f2700 a2=0 a3=1 items=0 ppid=2913 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:19:38.192000 audit[3033]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.192000 audit[3033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcd561600 a2=0 a3=1 items=0 ppid=2913 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:19:38.196000 audit[3036]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.196000 audit[3036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd7a3e950 a2=0 a3=1 items=0 ppid=2913 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:19:38.197000 audit[3037]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.197000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc75fe070 a2=0 a3=1 items=0 ppid=2913 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:19:38.200000 audit[3039]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.200000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef735050 a2=0 a3=1 items=0 ppid=2913 pid=3039 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:19:38.201000 audit[3040]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.201000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc037df00 a2=0 a3=1 items=0 ppid=2913 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:19:38.204000 audit[3042]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.204000 audit[3042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3599810 a2=0 a3=1 items=0 ppid=2913 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:19:38.208000 audit[3045]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.208000 audit[3045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc85c08c0 a2=0 a3=1 items=0 ppid=2913 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:19:38.211000 audit[3048]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.211000 audit[3048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff033be20 a2=0 a3=1 items=0 ppid=2913 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.211000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:19:38.213000 audit[3049]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.213000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0d3b690 a2=0 a3=1 items=0 ppid=2913 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:19:38.215000 audit[3051]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.215000 audit[3051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe3b88f70 a2=0 a3=1 items=0 ppid=2913 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:19:38.219000 audit[3054]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.219000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc0cf4770 a2=0 a3=1 items=0 ppid=2913 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:19:38.220000 audit[3055]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.220000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd075b30 a2=0 a3=1 items=0 ppid=2913 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:19:38.222000 audit[3057]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.222000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd0f6bcb0 a2=0 a3=1 items=0 ppid=2913 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:19:38.224000 audit[3058]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.224000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3cd1d80 a2=0 a3=1 items=0 ppid=2913 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:19:38.226000 audit[3060]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.226000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffccb6dd50 a2=0 a3=1 items=0 ppid=2913 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:19:38.230000 audit[3063]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:19:38.230000 audit[3063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe6ff7a50 a2=0 a3=1 items=0 ppid=2913 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:19:38.233000 audit[3065]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:19:38.233000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff3bf6020 a2=0 a3=1 items=0 ppid=2913 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.233000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:38.233000 audit[3065]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:19:38.233000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff3bf6020 a2=0 a3=1 items=0 ppid=2913 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:38.233000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:38.544121 kubelet[2754]: E1212 17:19:38.543727 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:38.556172 kubelet[2754]: I1212 17:19:38.555702 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dq4kt" podStartSLOduration=1.55568428 podStartE2EDuration="1.55568428s" podCreationTimestamp="2025-12-12 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:19:38.555255957 +0000 UTC m=+7.138388004" watchObservedRunningTime="2025-12-12 17:19:38.55568428 +0000 UTC m=+7.138816287" Dec 12 17:19:39.028128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538947758.mount: Deactivated successfully. Dec 12 17:19:40.425265 containerd[1581]: time="2025-12-12T17:19:40.425198490Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:40.426719 containerd[1581]: time="2025-12-12T17:19:40.426490096Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:19:40.427543 containerd[1581]: time="2025-12-12T17:19:40.427485741Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:40.430000 containerd[1581]: time="2025-12-12T17:19:40.429968794Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:40.430955 containerd[1581]: time="2025-12-12T17:19:40.430669477Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.711457575s" Dec 12 17:19:40.430955 containerd[1581]: time="2025-12-12T17:19:40.430702197Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:19:40.435986 containerd[1581]: time="2025-12-12T17:19:40.435952703Z" level=info msg="CreateContainer within sandbox \"88fccd0169556a891d392a54e414dca38288247ea1346d8ed700733ec1124974\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:19:40.477381 containerd[1581]: time="2025-12-12T17:19:40.476793225Z" level=info msg="Container 224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:40.482405 containerd[1581]: time="2025-12-12T17:19:40.482361813Z" level=info msg="CreateContainer within sandbox \"88fccd0169556a891d392a54e414dca38288247ea1346d8ed700733ec1124974\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016\"" Dec 12 17:19:40.483274 containerd[1581]: time="2025-12-12T17:19:40.483245457Z" level=info msg="StartContainer for \"224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016\"" Dec 12 17:19:40.485782 containerd[1581]: time="2025-12-12T17:19:40.485671869Z" level=info msg="connecting to shim 224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016" address="unix:///run/containerd/s/810b45e91fa779fc0eba66ace43a4ce4f1d4204abf9f742caa3fae71f41d0f40" protocol=ttrpc version=3 Dec 12 17:19:40.530750 systemd[1]: Started cri-containerd-224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016.scope - libcontainer container 224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016. Dec 12 17:19:40.555000 audit: BPF prog-id=144 op=LOAD Dec 12 17:19:40.555000 audit: BPF prog-id=145 op=LOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.555000 audit: BPF prog-id=145 op=UNLOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.555000 audit: BPF prog-id=146 op=LOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.555000 audit: BPF prog-id=147 op=LOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.555000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.555000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.555000 audit: BPF prog-id=148 op=LOAD Dec 12 17:19:40.555000 audit[3074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2852 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:40.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232346563663165353164323230306666316532333363613034363038 Dec 12 17:19:40.571463 containerd[1581]: time="2025-12-12T17:19:40.571414174Z" level=info msg="StartContainer for \"224ecf1e51d2200ff1e233ca04608c22894ef0f968a890b4727803f187979016\" returns successfully" Dec 12 17:19:41.510445 kubelet[2754]: E1212 17:19:41.509750 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:41.554636 kubelet[2754]: E1212 17:19:41.554104 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:41.578990 kubelet[2754]: I1212 17:19:41.578926 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cttnw" podStartSLOduration=1.8661888009999998 podStartE2EDuration="4.578906823s" podCreationTimestamp="2025-12-12 17:19:37 +0000 UTC" firstStartedPulling="2025-12-12 17:19:37.71887722 +0000 UTC m=+6.302009267" lastFinishedPulling="2025-12-12 17:19:40.431595242 +0000 UTC m=+9.014727289" observedRunningTime="2025-12-12 17:19:41.566775527 +0000 UTC m=+10.149907574" 
watchObservedRunningTime="2025-12-12 17:19:41.578906823 +0000 UTC m=+10.162038870" Dec 12 17:19:42.561581 kubelet[2754]: E1212 17:19:42.561548 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:44.197836 kubelet[2754]: E1212 17:19:44.197758 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:44.307818 kubelet[2754]: E1212 17:19:44.307741 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:44.573375 kubelet[2754]: E1212 17:19:44.573346 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:45.968298 update_engine[1566]: I20251212 17:19:45.967631 1566 update_attempter.cc:509] Updating boot flags... Dec 12 17:19:46.025157 sudo[1794]: pam_unix(sudo:session): session closed for user root Dec 12 17:19:46.023000 audit[1794]: USER_END pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:46.030040 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:19:46.030139 kernel: audit: type=1106 audit(1765559986.023:503): pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:46.023000 audit[1794]: CRED_DISP pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:46.035478 kernel: audit: type=1104 audit(1765559986.023:504): pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:19:46.042176 sshd[1793]: Connection closed by 10.0.0.1 port 37476 Dec 12 17:19:46.042672 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Dec 12 17:19:46.043000 audit[1790]: USER_END pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:46.050855 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:37476.service: Deactivated successfully. 
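The recurring kubelet dns.go:153 "Nameserver limits exceeded" messages above are informational: the node's resolver configuration lists more nameservers than kubelet will pass through to pods, so only the first three (the applied line 1.1.1.1 1.0.0.1 8.8.8.8) are kept and the rest are omitted. A minimal sketch of the same check, assuming the standard /etc/resolv.conf format and the three-nameserver cap that kubelet applies; the path, constant, and function name below are illustrative, not kubelet's actual code:

    # Count nameserver entries the way the kubelet warning above implies:
    # keep the first three, report anything beyond them as omitted.
    from pathlib import Path

    MAX_NAMESERVERS = 3  # assumed per-pod nameserver cap

    def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
        nameservers = [
            line.split()[1]
            for line in Path(path).read_text().splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1
        ]
        print("applied nameserver line:", " ".join(nameservers[:MAX_NAMESERVERS]))
        if len(nameservers) > MAX_NAMESERVERS:
            print("omitted:", " ".join(nameservers[MAX_NAMESERVERS:]))

    check_resolv_conf()
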
Dec 12 17:19:46.043000 audit[1790]: CRED_DISP pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:46.053971 kernel: audit: type=1106 audit(1765559986.043:505): pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:46.054027 kernel: audit: type=1104 audit(1765559986.043:506): pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:19:46.054048 kernel: audit: type=1131 audit(1765559986.049:507): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:37476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:46.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:37476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:19:46.055227 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:19:46.056281 systemd[1]: session-7.scope: Consumed 8.403s CPU time, 210.6M memory peak. Dec 12 17:19:46.069642 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:19:46.082389 systemd-logind[1564]: Removed session 7. 
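The recurring kubelet dns.go:153 warnings in this log ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") indicate that the resolv.conf the kubelet reads lists more nameserver entries than it will pass through: it applies only the first three (the conventional resolver limit) and drops the rest. A small Python check of that condition follows; the path and limit are the common defaults and are assumptions here, since the kubelet may be pointed at a different file via --resolv-conf.

# Count nameserver entries in resolv.conf; the kubelet applies at most three
# and warns "Nameserver limits exceeded" when more are configured.
# RESOLV_CONF and LIMIT are assumed defaults, used only for illustration.
from pathlib import Path

RESOLV_CONF = "/etc/resolv.conf"
LIMIT = 3

def nameservers(path: str = RESOLV_CONF) -> list[str]:
    entries = []
    for line in Path(path).read_text().splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            entries.append(fields[1])
    return entries

if __name__ == "__main__":
    ns = nameservers()
    print(ns)
    print("exceeds kubelet limit" if len(ns) > LIMIT else "within limit")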
Dec 12 17:19:47.239000 audit[3182]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:47.239000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffd53f280 a2=0 a3=1 items=0 ppid=2913 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:47.246419 kernel: audit: type=1325 audit(1765559987.239:508): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:47.246555 kernel: audit: type=1300 audit(1765559987.239:508): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffd53f280 a2=0 a3=1 items=0 ppid=2913 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:47.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:47.251313 kernel: audit: type=1327 audit(1765559987.239:508): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:47.248000 audit[3182]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:47.253971 kernel: audit: type=1325 audit(1765559987.248:509): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:47.254035 kernel: audit: type=1300 audit(1765559987.248:509): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd53f280 a2=0 a3=1 items=0 ppid=2913 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:47.248000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd53f280 a2=0 a3=1 items=0 ppid=2913 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:47.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:47.264000 audit[3184]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:47.264000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc4b5de00 a2=0 a3=1 items=0 ppid=2913 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:47.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:47.272000 audit[3184]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:19:47.272000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4b5de00 a2=0 a3=1 items=0 ppid=2913 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:47.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.200000 audit[3186]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.203193 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:19:51.203329 kernel: audit: type=1325 audit(1765559991.200:512): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.200000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc2e31560 a2=0 a3=1 items=0 ppid=2913 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.210324 kernel: audit: type=1300 audit(1765559991.200:512): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc2e31560 a2=0 a3=1 items=0 ppid=2913 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.216324 kernel: audit: type=1327 audit(1765559991.200:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.207000 audit[3186]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.219277 kernel: audit: type=1325 audit(1765559991.207:513): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.207000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2e31560 a2=0 a3=1 items=0 ppid=2913 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.225186 kernel: audit: type=1300 audit(1765559991.207:513): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2e31560 a2=0 a3=1 items=0 ppid=2913 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.225275 kernel: audit: type=1327 audit(1765559991.207:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.221000 audit[3188]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.229577 kernel: audit: type=1325 audit(1765559991.221:514): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.229692 kernel: audit: type=1300 audit(1765559991.221:514): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd19dd2e0 a2=0 a3=1 items=0 ppid=2913 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.221000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd19dd2e0 a2=0 a3=1 items=0 ppid=2913 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.235260 kernel: audit: type=1327 audit(1765559991.221:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.236000 audit[3188]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:51.236000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd19dd2e0 a2=0 a3=1 items=0 ppid=2913 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:51.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:51.241558 kernel: audit: type=1325 audit(1765559991.236:515): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:52.250000 audit[3191]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:52.250000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdbbb2340 a2=0 a3=1 items=0 ppid=2913 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:52.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:52.256000 audit[3191]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:52.256000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdbbb2340 a2=0 a3=1 items=0 ppid=2913 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:52.256000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:53.790000 audit[3193]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:53.790000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc00c15f0 a2=0 a3=1 items=0 ppid=2913 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:53.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:53.796000 audit[3193]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:53.796000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc00c15f0 a2=0 a3=1 items=0 ppid=2913 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:53.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:53.838717 kubelet[2754]: I1212 17:19:53.838648 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wng4h\" (UniqueName: \"kubernetes.io/projected/d85629a9-93dd-4c50-ba48-b7e97d75fc82-kube-api-access-wng4h\") pod \"calico-typha-7bf5d6655d-whp7k\" (UID: \"d85629a9-93dd-4c50-ba48-b7e97d75fc82\") " pod="calico-system/calico-typha-7bf5d6655d-whp7k" Dec 12 17:19:53.838717 kubelet[2754]: I1212 17:19:53.838704 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d85629a9-93dd-4c50-ba48-b7e97d75fc82-typha-certs\") pod \"calico-typha-7bf5d6655d-whp7k\" (UID: \"d85629a9-93dd-4c50-ba48-b7e97d75fc82\") " pod="calico-system/calico-typha-7bf5d6655d-whp7k" Dec 12 17:19:53.838717 kubelet[2754]: I1212 17:19:53.838725 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d85629a9-93dd-4c50-ba48-b7e97d75fc82-tigera-ca-bundle\") pod \"calico-typha-7bf5d6655d-whp7k\" (UID: \"d85629a9-93dd-4c50-ba48-b7e97d75fc82\") " pod="calico-system/calico-typha-7bf5d6655d-whp7k" Dec 12 17:19:53.844393 systemd[1]: Created slice kubepods-besteffort-podd85629a9_93dd_4c50_ba48_b7e97d75fc82.slice - libcontainer container kubepods-besteffort-podd85629a9_93dd_4c50_ba48_b7e97d75fc82.slice. Dec 12 17:19:54.141807 systemd[1]: Created slice kubepods-besteffort-poddc9facb9_97c3_4cd8_bdca_4c94661d5881.slice - libcontainer container kubepods-besteffort-poddc9facb9_97c3_4cd8_bdca_4c94661d5881.slice. 
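The proctitle= fields in the audit records above are the audited process's command line, hex-encoded by auditd with NUL bytes separating the arguments. A minimal Python sketch for decoding them while reading this log (the helper name is illustrative, not part of any tool referenced here):

# Decode an audit PROCTITLE hex string back into its argv list.
# auditd hex-encodes the command line; NUL bytes separate the arguments.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

if __name__ == "__main__":
    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The sample is the iptables-restor proctitle that recurs in the NETFILTER_CFG records above. The runc proctitles decode the same way, to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... (the hex in those records is truncated, so the tail of the path is not recoverable here).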
Dec 12 17:19:54.144023 kubelet[2754]: I1212 17:19:54.143954 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-lib-modules\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144023 kubelet[2754]: I1212 17:19:54.144000 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-var-lib-calico\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144185 kubelet[2754]: I1212 17:19:54.144061 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-cni-net-dir\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144185 kubelet[2754]: I1212 17:19:54.144109 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-flexvol-driver-host\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144185 kubelet[2754]: I1212 17:19:54.144144 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc9facb9-97c3-4cd8-bdca-4c94661d5881-node-certs\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144312 kubelet[2754]: I1212 17:19:54.144185 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9facb9-97c3-4cd8-bdca-4c94661d5881-tigera-ca-bundle\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144312 kubelet[2754]: I1212 17:19:54.144205 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-var-run-calico\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144312 kubelet[2754]: I1212 17:19:54.144277 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-cni-log-dir\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144312 kubelet[2754]: I1212 17:19:54.144301 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-cni-bin-dir\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.144554 kubelet[2754]: I1212 17:19:54.144362 2754 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-xtables-lock\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.146784 kubelet[2754]: I1212 17:19:54.146219 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc9facb9-97c3-4cd8-bdca-4c94661d5881-policysync\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.146784 kubelet[2754]: I1212 17:19:54.146550 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59vxx\" (UniqueName: \"kubernetes.io/projected/dc9facb9-97c3-4cd8-bdca-4c94661d5881-kube-api-access-59vxx\") pod \"calico-node-th882\" (UID: \"dc9facb9-97c3-4cd8-bdca-4c94661d5881\") " pod="calico-system/calico-node-th882" Dec 12 17:19:54.151765 kubelet[2754]: E1212 17:19:54.151725 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:54.153199 containerd[1581]: time="2025-12-12T17:19:54.153144059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bf5d6655d-whp7k,Uid:d85629a9-93dd-4c50-ba48-b7e97d75fc82,Namespace:calico-system,Attempt:0,}" Dec 12 17:19:54.200032 containerd[1581]: time="2025-12-12T17:19:54.199984713Z" level=info msg="connecting to shim 725bf7c27e64544c31170d011b91c82e63d4892867a2ccd12232ede0eb082324" address="unix:///run/containerd/s/245cd91f611647037b9b76594cf3bc06820ae4f6e59ed956be4c256c0a85de38" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:54.231827 systemd[1]: Started cri-containerd-725bf7c27e64544c31170d011b91c82e63d4892867a2ccd12232ede0eb082324.scope - libcontainer container 725bf7c27e64544c31170d011b91c82e63d4892867a2ccd12232ede0eb082324. 
Dec 12 17:19:54.247000 audit: BPF prog-id=149 op=LOAD Dec 12 17:19:54.250000 audit: BPF prog-id=150 op=LOAD Dec 12 17:19:54.250000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.251000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:19:54.251000 audit[3216]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.251000 audit: BPF prog-id=151 op=LOAD Dec 12 17:19:54.251000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.252000 audit: BPF prog-id=152 op=LOAD Dec 12 17:19:54.252000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.252000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:19:54.252000 audit[3216]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.252000 audit: BPF prog-id=151 op=UNLOAD Dec 12 17:19:54.252000 audit[3216]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.252000 audit: BPF prog-id=153 op=LOAD Dec 12 17:19:54.252000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732356266376332376536343534346333313137306430313162393163 Dec 12 17:19:54.264380 kubelet[2754]: E1212 17:19:54.264332 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.264380 kubelet[2754]: W1212 17:19:54.264361 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.266434 kubelet[2754]: E1212 17:19:54.266387 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.312028 containerd[1581]: time="2025-12-12T17:19:54.311984138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bf5d6655d-whp7k,Uid:d85629a9-93dd-4c50-ba48-b7e97d75fc82,Namespace:calico-system,Attempt:0,} returns sandbox id \"725bf7c27e64544c31170d011b91c82e63d4892867a2ccd12232ede0eb082324\"" Dec 12 17:19:54.315612 kubelet[2754]: E1212 17:19:54.315498 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:54.319329 containerd[1581]: time="2025-12-12T17:19:54.319287072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:19:54.419112 kubelet[2754]: E1212 17:19:54.418268 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:19:54.447538 kubelet[2754]: E1212 17:19:54.447442 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.447538 kubelet[2754]: W1212 17:19:54.447471 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.447538 kubelet[2754]: E1212 17:19:54.447494 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.448051 kubelet[2754]: E1212 17:19:54.447941 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.448051 kubelet[2754]: W1212 17:19:54.447957 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.448051 kubelet[2754]: E1212 17:19:54.448012 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.448353 kubelet[2754]: E1212 17:19:54.448340 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.448483 kubelet[2754]: W1212 17:19:54.448422 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.448483 kubelet[2754]: E1212 17:19:54.448438 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.448834 kubelet[2754]: E1212 17:19:54.448786 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.448834 kubelet[2754]: W1212 17:19:54.448800 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.448834 kubelet[2754]: E1212 17:19:54.448814 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.449243 kubelet[2754]: E1212 17:19:54.449145 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.449243 kubelet[2754]: W1212 17:19:54.449159 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.449243 kubelet[2754]: E1212 17:19:54.449185 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.449588 kubelet[2754]: E1212 17:19:54.449462 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.449588 kubelet[2754]: W1212 17:19:54.449474 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.449588 kubelet[2754]: E1212 17:19:54.449485 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.449769 kubelet[2754]: E1212 17:19:54.449756 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.449820 kubelet[2754]: W1212 17:19:54.449810 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.449880 kubelet[2754]: E1212 17:19:54.449870 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.450123 kubelet[2754]: E1212 17:19:54.450110 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.450235 kubelet[2754]: W1212 17:19:54.450198 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.450235 kubelet[2754]: E1212 17:19:54.450215 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.450646 kubelet[2754]: E1212 17:19:54.450544 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.450646 kubelet[2754]: W1212 17:19:54.450556 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.450646 kubelet[2754]: E1212 17:19:54.450567 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.450795 kubelet[2754]: E1212 17:19:54.450783 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.450866 kubelet[2754]: W1212 17:19:54.450853 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.450930 kubelet[2754]: E1212 17:19:54.450921 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.451329 kubelet[2754]: E1212 17:19:54.451216 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.451329 kubelet[2754]: W1212 17:19:54.451233 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.451329 kubelet[2754]: E1212 17:19:54.451243 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.451481 kubelet[2754]: E1212 17:19:54.451470 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.451600 kubelet[2754]: W1212 17:19:54.451557 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.451600 kubelet[2754]: E1212 17:19:54.451573 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.452003 kubelet[2754]: E1212 17:19:54.451982 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:54.452486 kubelet[2754]: E1212 17:19:54.452112 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.452486 kubelet[2754]: W1212 17:19:54.452305 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.452486 kubelet[2754]: E1212 17:19:54.452323 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.452486 kubelet[2754]: I1212 17:19:54.452349 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c455fe7-f7d2-456e-ac64-f3619ba04a75-socket-dir\") pod \"csi-node-driver-p27qt\" (UID: \"9c455fe7-f7d2-456e-ac64-f3619ba04a75\") " pod="calico-system/csi-node-driver-p27qt" Dec 12 17:19:54.453149 kubelet[2754]: E1212 17:19:54.453103 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.453149 kubelet[2754]: W1212 17:19:54.453119 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.453149 kubelet[2754]: E1212 17:19:54.453145 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.453266 kubelet[2754]: I1212 17:19:54.453182 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c455fe7-f7d2-456e-ac64-f3619ba04a75-registration-dir\") pod \"csi-node-driver-p27qt\" (UID: \"9c455fe7-f7d2-456e-ac64-f3619ba04a75\") " pod="calico-system/csi-node-driver-p27qt" Dec 12 17:19:54.453294 containerd[1581]: time="2025-12-12T17:19:54.453206541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-th882,Uid:dc9facb9-97c3-4cd8-bdca-4c94661d5881,Namespace:calico-system,Attempt:0,}" Dec 12 17:19:54.453494 kubelet[2754]: E1212 17:19:54.453470 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.453494 kubelet[2754]: W1212 17:19:54.453489 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.453614 kubelet[2754]: E1212 17:19:54.453500 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.453747 kubelet[2754]: E1212 17:19:54.453732 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.453784 kubelet[2754]: W1212 17:19:54.453747 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.453784 kubelet[2754]: E1212 17:19:54.453761 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.454063 kubelet[2754]: E1212 17:19:54.454035 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.454093 kubelet[2754]: W1212 17:19:54.454062 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.454093 kubelet[2754]: E1212 17:19:54.454075 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.454405 kubelet[2754]: E1212 17:19:54.454375 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.454439 kubelet[2754]: W1212 17:19:54.454405 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.454439 kubelet[2754]: E1212 17:19:54.454419 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.454556 kubelet[2754]: I1212 17:19:54.454490 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c455fe7-f7d2-456e-ac64-f3619ba04a75-kubelet-dir\") pod \"csi-node-driver-p27qt\" (UID: \"9c455fe7-f7d2-456e-ac64-f3619ba04a75\") " pod="calico-system/csi-node-driver-p27qt" Dec 12 17:19:54.454955 kubelet[2754]: E1212 17:19:54.454831 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.454955 kubelet[2754]: W1212 17:19:54.454850 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.454955 kubelet[2754]: E1212 17:19:54.454864 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.455101 kubelet[2754]: E1212 17:19:54.455089 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.455192 kubelet[2754]: W1212 17:19:54.455176 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.455299 kubelet[2754]: E1212 17:19:54.455285 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.455637 kubelet[2754]: E1212 17:19:54.455575 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.455637 kubelet[2754]: W1212 17:19:54.455591 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.455637 kubelet[2754]: E1212 17:19:54.455602 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.456256 kubelet[2754]: E1212 17:19:54.456210 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.456256 kubelet[2754]: W1212 17:19:54.456227 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.456256 kubelet[2754]: E1212 17:19:54.456240 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.456918 kubelet[2754]: E1212 17:19:54.456780 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.456918 kubelet[2754]: W1212 17:19:54.456803 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.456918 kubelet[2754]: E1212 17:19:54.456821 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.458181 kubelet[2754]: E1212 17:19:54.458146 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.458411 kubelet[2754]: W1212 17:19:54.458369 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.458411 kubelet[2754]: E1212 17:19:54.458398 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.458821 kubelet[2754]: E1212 17:19:54.458779 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.458821 kubelet[2754]: W1212 17:19:54.458796 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.458821 kubelet[2754]: E1212 17:19:54.458807 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.459173 kubelet[2754]: E1212 17:19:54.459127 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.459173 kubelet[2754]: W1212 17:19:54.459140 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.459173 kubelet[2754]: E1212 17:19:54.459151 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.459672 kubelet[2754]: E1212 17:19:54.459548 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.459672 kubelet[2754]: W1212 17:19:54.459563 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.459672 kubelet[2754]: E1212 17:19:54.459575 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.459888 kubelet[2754]: E1212 17:19:54.459755 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.459888 kubelet[2754]: W1212 17:19:54.459764 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.459888 kubelet[2754]: E1212 17:19:54.459774 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.460123 kubelet[2754]: E1212 17:19:54.460108 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.460321 kubelet[2754]: W1212 17:19:54.460302 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.460383 kubelet[2754]: E1212 17:19:54.460372 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.480067 containerd[1581]: time="2025-12-12T17:19:54.479956555Z" level=info msg="connecting to shim 8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e" address="unix:///run/containerd/s/0cfca307f53a7e47662de9a1573d8ceb7143966524487b5911fab6feb31fb10e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:19:54.528064 systemd[1]: Started cri-containerd-8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e.scope - libcontainer container 8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e. Dec 12 17:19:54.540000 audit: BPF prog-id=154 op=LOAD Dec 12 17:19:54.540000 audit: BPF prog-id=155 op=LOAD Dec 12 17:19:54.540000 audit[3307]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3296 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.540000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:19:54.540000 audit[3307]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.540000 audit: BPF prog-id=156 op=LOAD Dec 12 17:19:54.540000 audit[3307]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3296 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.540000 audit: BPF prog-id=157 op=LOAD Dec 12 17:19:54.540000 audit[3307]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3296 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.540000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:19:54.540000 audit[3307]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3307 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.541000 audit: BPF prog-id=156 op=UNLOAD Dec 12 17:19:54.541000 audit[3307]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.541000 audit: BPF prog-id=158 op=LOAD Dec 12 17:19:54.541000 audit[3307]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3296 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323436363661616534366532663338383731363363353736646131 Dec 12 17:19:54.561900 kubelet[2754]: E1212 17:19:54.561865 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.562010 kubelet[2754]: W1212 17:19:54.561891 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.562034 kubelet[2754]: E1212 17:19:54.562008 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.562056 kubelet[2754]: I1212 17:19:54.562048 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c455fe7-f7d2-456e-ac64-f3619ba04a75-varrun\") pod \"csi-node-driver-p27qt\" (UID: \"9c455fe7-f7d2-456e-ac64-f3619ba04a75\") " pod="calico-system/csi-node-driver-p27qt" Dec 12 17:19:54.562423 kubelet[2754]: E1212 17:19:54.562398 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.562423 kubelet[2754]: W1212 17:19:54.562415 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.562576 kubelet[2754]: E1212 17:19:54.562427 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.562925 kubelet[2754]: E1212 17:19:54.562899 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.562925 kubelet[2754]: W1212 17:19:54.562918 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.562986 kubelet[2754]: E1212 17:19:54.562929 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.563107 kubelet[2754]: I1212 17:19:54.562951 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hd8g\" (UniqueName: \"kubernetes.io/projected/9c455fe7-f7d2-456e-ac64-f3619ba04a75-kube-api-access-8hd8g\") pod \"csi-node-driver-p27qt\" (UID: \"9c455fe7-f7d2-456e-ac64-f3619ba04a75\") " pod="calico-system/csi-node-driver-p27qt" Dec 12 17:19:54.563276 kubelet[2754]: E1212 17:19:54.563252 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.563276 kubelet[2754]: W1212 17:19:54.563271 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.563342 kubelet[2754]: E1212 17:19:54.563287 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.563568 kubelet[2754]: E1212 17:19:54.563464 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.563568 kubelet[2754]: W1212 17:19:54.563475 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.563568 kubelet[2754]: E1212 17:19:54.563485 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.563684 kubelet[2754]: E1212 17:19:54.563663 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.563684 kubelet[2754]: W1212 17:19:54.563675 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.563684 kubelet[2754]: E1212 17:19:54.563684 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.564008 kubelet[2754]: E1212 17:19:54.563837 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.564008 kubelet[2754]: W1212 17:19:54.563847 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.564008 kubelet[2754]: E1212 17:19:54.563855 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.564008 kubelet[2754]: E1212 17:19:54.563985 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.564008 kubelet[2754]: W1212 17:19:54.563992 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.564008 kubelet[2754]: E1212 17:19:54.564001 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.564385 kubelet[2754]: E1212 17:19:54.564233 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.564385 kubelet[2754]: W1212 17:19:54.564275 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.564385 kubelet[2754]: E1212 17:19:54.564288 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.564590 kubelet[2754]: E1212 17:19:54.564559 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.564590 kubelet[2754]: W1212 17:19:54.564573 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.565349 kubelet[2754]: E1212 17:19:54.564584 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.565603 kubelet[2754]: E1212 17:19:54.565584 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.565603 kubelet[2754]: W1212 17:19:54.565602 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.565693 kubelet[2754]: E1212 17:19:54.565616 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.565866 kubelet[2754]: E1212 17:19:54.565834 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.565866 kubelet[2754]: W1212 17:19:54.565850 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.565866 kubelet[2754]: E1212 17:19:54.565860 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.566085 kubelet[2754]: E1212 17:19:54.566071 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.566085 kubelet[2754]: W1212 17:19:54.566083 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.566301 kubelet[2754]: E1212 17:19:54.566092 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.566419 kubelet[2754]: E1212 17:19:54.566392 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.566497 kubelet[2754]: W1212 17:19:54.566460 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.566497 kubelet[2754]: E1212 17:19:54.566479 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.566801 kubelet[2754]: E1212 17:19:54.566781 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.566968 kubelet[2754]: W1212 17:19:54.566854 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.566968 kubelet[2754]: E1212 17:19:54.566871 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.567141 kubelet[2754]: E1212 17:19:54.567102 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.567141 kubelet[2754]: W1212 17:19:54.567114 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.567295 kubelet[2754]: E1212 17:19:54.567126 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.567503 kubelet[2754]: E1212 17:19:54.567486 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.567554 kubelet[2754]: W1212 17:19:54.567503 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.567554 kubelet[2754]: E1212 17:19:54.567551 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.567701 kubelet[2754]: E1212 17:19:54.567690 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.567701 kubelet[2754]: W1212 17:19:54.567701 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.567747 kubelet[2754]: E1212 17:19:54.567709 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.567845 kubelet[2754]: E1212 17:19:54.567834 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.567845 kubelet[2754]: W1212 17:19:54.567844 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.567899 kubelet[2754]: E1212 17:19:54.567851 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.568040 kubelet[2754]: E1212 17:19:54.568029 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.568067 kubelet[2754]: W1212 17:19:54.568041 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.568067 kubelet[2754]: E1212 17:19:54.568049 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.604294 kubelet[2754]: E1212 17:19:54.604182 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.604294 kubelet[2754]: W1212 17:19:54.604224 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.604294 kubelet[2754]: E1212 17:19:54.604244 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.667464 kubelet[2754]: E1212 17:19:54.667394 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.667464 kubelet[2754]: W1212 17:19:54.667421 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.667464 kubelet[2754]: E1212 17:19:54.667442 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.667691 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.671991 kubelet[2754]: W1212 17:19:54.667706 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.667716 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.667993 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.671991 kubelet[2754]: W1212 17:19:54.668010 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.668025 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.668205 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.671991 kubelet[2754]: W1212 17:19:54.668216 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.668225 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.671991 kubelet[2754]: E1212 17:19:54.668434 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.673578 kubelet[2754]: W1212 17:19:54.668444 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.673578 kubelet[2754]: E1212 17:19:54.668453 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.673578 kubelet[2754]: E1212 17:19:54.668669 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.673578 kubelet[2754]: W1212 17:19:54.668678 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.673578 kubelet[2754]: E1212 17:19:54.668687 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.673578 kubelet[2754]: E1212 17:19:54.670313 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.673578 kubelet[2754]: W1212 17:19:54.670332 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.673578 kubelet[2754]: E1212 17:19:54.670349 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.673578 kubelet[2754]: E1212 17:19:54.670565 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.673578 kubelet[2754]: W1212 17:19:54.670574 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.673791 kubelet[2754]: E1212 17:19:54.670585 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.673791 kubelet[2754]: E1212 17:19:54.670750 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.673791 kubelet[2754]: W1212 17:19:54.670758 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.673791 kubelet[2754]: E1212 17:19:54.670767 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.673791 kubelet[2754]: E1212 17:19:54.671105 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.673791 kubelet[2754]: W1212 17:19:54.671122 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.673791 kubelet[2754]: E1212 17:19:54.671148 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:54.686981 kubelet[2754]: E1212 17:19:54.686952 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:54.686981 kubelet[2754]: W1212 17:19:54.686976 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:54.687223 kubelet[2754]: E1212 17:19:54.686998 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:54.724285 containerd[1581]: time="2025-12-12T17:19:54.724223005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-th882,Uid:dc9facb9-97c3-4cd8-bdca-4c94661d5881,Namespace:calico-system,Attempt:0,} returns sandbox id \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\"" Dec 12 17:19:54.725114 kubelet[2754]: E1212 17:19:54.725090 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:54.810000 audit[3366]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:54.810000 audit[3366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff6e047b0 a2=0 a3=1 items=0 ppid=2913 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:54.820000 audit[3366]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:19:54.820000 audit[3366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff6e047b0 a2=0 a3=1 items=0 ppid=2913 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:54.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:19:56.024762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253368188.mount: Deactivated successfully. 
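The repeated driver-call.go and plugins.go messages in these entries come from kubelet's FlexVolume plugin probing: kubelet executes every driver it finds under the volume plugin directory with the argument `init` and expects a JSON status object on stdout. The binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not installed yet (the flexvol-driver container started further down is what normally provides it), so each call yields no output and the JSON parse fails with "unexpected end of JSON input". A minimal sketch of that probe, using a hypothetical probe_flexvolume_driver helper rather than kubelet's actual Go code:

```python
import json
import subprocess

def probe_flexvolume_driver(driver_path: str) -> dict:
    """Roughly what kubelet's FlexVolume driver-call does for `init`:
    run the driver binary and parse its stdout as a JSON status object."""
    try:
        out = subprocess.run(
            [driver_path, "init"],
            capture_output=True, text=True, timeout=10,
        ).stdout
    except FileNotFoundError:
        out = ""  # missing binary -> empty output, as in the log above
    # An empty string is not valid JSON, which is what produces
    # "unexpected end of JSON input" in the kubelet messages.
    return json.loads(out)  # a working driver prints e.g. {"status": "Success", ...}

# probe_flexvolume_driver(
#     "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
```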
Dec 12 17:19:56.473686 containerd[1581]: time="2025-12-12T17:19:56.473555393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:56.479081 containerd[1581]: time="2025-12-12T17:19:56.479020043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:19:56.480048 containerd[1581]: time="2025-12-12T17:19:56.480012885Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:56.489537 containerd[1581]: time="2025-12-12T17:19:56.487320218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:56.492401 containerd[1581]: time="2025-12-12T17:19:56.492343986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.173005514s" Dec 12 17:19:56.492605 containerd[1581]: time="2025-12-12T17:19:56.492575507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:19:56.493947 containerd[1581]: time="2025-12-12T17:19:56.493882149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:19:56.510315 kubelet[2754]: E1212 17:19:56.510254 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:19:56.519942 containerd[1581]: time="2025-12-12T17:19:56.519876315Z" level=info msg="CreateContainer within sandbox \"725bf7c27e64544c31170d011b91c82e63d4892867a2ccd12232ede0eb082324\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:19:56.527543 containerd[1581]: time="2025-12-12T17:19:56.526503407Z" level=info msg="Container 3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:56.535537 containerd[1581]: time="2025-12-12T17:19:56.535446502Z" level=info msg="CreateContainer within sandbox \"725bf7c27e64544c31170d011b91c82e63d4892867a2ccd12232ede0eb082324\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a\"" Dec 12 17:19:56.536553 containerd[1581]: time="2025-12-12T17:19:56.536434824Z" level=info msg="StartContainer for \"3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a\"" Dec 12 17:19:56.537843 containerd[1581]: time="2025-12-12T17:19:56.537808347Z" level=info msg="connecting to shim 3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a" address="unix:///run/containerd/s/245cd91f611647037b9b76594cf3bc06820ae4f6e59ed956be4c256c0a85de38" protocol=ttrpc version=3 Dec 12 17:19:56.560764 systemd[1]: Started 
cri-containerd-3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a.scope - libcontainer container 3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a. Dec 12 17:19:56.571000 audit: BPF prog-id=159 op=LOAD Dec 12 17:19:56.574077 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 12 17:19:56.574142 kernel: audit: type=1334 audit(1765559996.571:538): prog-id=159 op=LOAD Dec 12 17:19:56.573000 audit: BPF prog-id=160 op=LOAD Dec 12 17:19:56.576578 kernel: audit: type=1334 audit(1765559996.573:539): prog-id=160 op=LOAD Dec 12 17:19:56.579803 kernel: audit: type=1300 audit(1765559996.573:539): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.583881 kernel: audit: type=1327 audit(1765559996.573:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:19:56.585249 kernel: audit: type=1334 audit(1765559996.573:540): prog-id=160 op=UNLOAD Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.589282 kernel: audit: type=1300 audit(1765559996.573:540): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.592463 kernel: audit: type=1327 audit(1765559996.573:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: BPF prog-id=161 op=LOAD Dec 12 17:19:56.593538 kernel: audit: type=1334 audit(1765559996.573:541): prog-id=161 op=LOAD Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.598223 kernel: audit: type=1300 audit(1765559996.573:541): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.598299 kernel: audit: type=1327 audit(1765559996.573:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: BPF prog-id=162 op=LOAD Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: BPF prog-id=161 op=UNLOAD Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.573000 audit: BPF prog-id=163 op=LOAD Dec 12 17:19:56.573000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3204 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:56.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613837353162623737666634353930326132303839386262636666 Dec 12 17:19:56.622335 containerd[1581]: time="2025-12-12T17:19:56.622223895Z" level=info msg="StartContainer for \"3aa8751bb77ff45902a20898bbcffb3d9cfc09b5b2cf4a7219fbf55cb6c2070a\" returns successfully" Dec 12 17:19:57.530840 containerd[1581]: time="2025-12-12T17:19:57.530778599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:57.531798 containerd[1581]: time="2025-12-12T17:19:57.531306120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:19:57.532306 containerd[1581]: time="2025-12-12T17:19:57.532261401Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:57.536476 containerd[1581]: time="2025-12-12T17:19:57.536408608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:19:57.537064 containerd[1581]: time="2025-12-12T17:19:57.537032929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.04310882s" Dec 12 17:19:57.537113 containerd[1581]: time="2025-12-12T17:19:57.537070209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:19:57.541593 containerd[1581]: time="2025-12-12T17:19:57.541106256Z" level=info msg="CreateContainer within sandbox \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:19:57.565546 containerd[1581]: time="2025-12-12T17:19:57.564711695Z" level=info msg="Container a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:19:57.573449 containerd[1581]: time="2025-12-12T17:19:57.573397429Z" level=info msg="CreateContainer within sandbox \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3\"" Dec 12 17:19:57.574249 containerd[1581]: time="2025-12-12T17:19:57.574213511Z" level=info msg="StartContainer for \"a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3\"" Dec 12 17:19:57.576369 containerd[1581]: time="2025-12-12T17:19:57.576327114Z" level=info msg="connecting to shim a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3" 
address="unix:///run/containerd/s/0cfca307f53a7e47662de9a1573d8ceb7143966524487b5911fab6feb31fb10e" protocol=ttrpc version=3 Dec 12 17:19:57.601787 systemd[1]: Started cri-containerd-a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3.scope - libcontainer container a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3. Dec 12 17:19:57.611968 kubelet[2754]: E1212 17:19:57.611848 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:57.655000 audit: BPF prog-id=164 op=LOAD Dec 12 17:19:57.655000 audit[3418]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3296 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:57.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313235353062653932303561313735303362663864643937303066 Dec 12 17:19:57.656000 audit: BPF prog-id=165 op=LOAD Dec 12 17:19:57.656000 audit[3418]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3296 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:57.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313235353062653932303561313735303362663864643937303066 Dec 12 17:19:57.656000 audit: BPF prog-id=165 op=UNLOAD Dec 12 17:19:57.656000 audit[3418]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:57.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313235353062653932303561313735303362663864643937303066 Dec 12 17:19:57.656000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:19:57.656000 audit[3418]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:57.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313235353062653932303561313735303362663864643937303066 Dec 12 17:19:57.656000 audit: BPF prog-id=166 op=LOAD Dec 12 17:19:57.656000 audit[3418]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3296 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:19:57.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135313235353062653932303561313735303362663864643937303066 Dec 12 17:19:57.679258 kubelet[2754]: E1212 17:19:57.679222 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.679258 kubelet[2754]: W1212 17:19:57.679248 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.679436 kubelet[2754]: E1212 17:19:57.679279 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.679489 kubelet[2754]: E1212 17:19:57.679466 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.679553 kubelet[2754]: W1212 17:19:57.679479 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.679587 kubelet[2754]: E1212 17:19:57.679555 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.679743 kubelet[2754]: E1212 17:19:57.679732 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.679743 kubelet[2754]: W1212 17:19:57.679743 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.679799 kubelet[2754]: E1212 17:19:57.679752 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.679924 kubelet[2754]: E1212 17:19:57.679913 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.679959 kubelet[2754]: W1212 17:19:57.679924 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.679959 kubelet[2754]: E1212 17:19:57.679933 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:57.680131 kubelet[2754]: E1212 17:19:57.680120 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.680131 kubelet[2754]: W1212 17:19:57.680131 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.680205 kubelet[2754]: E1212 17:19:57.680140 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.680321 kubelet[2754]: E1212 17:19:57.680310 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.680321 kubelet[2754]: W1212 17:19:57.680321 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.680381 kubelet[2754]: E1212 17:19:57.680329 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.680487 kubelet[2754]: E1212 17:19:57.680477 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.680487 kubelet[2754]: W1212 17:19:57.680487 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.680543 kubelet[2754]: E1212 17:19:57.680495 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.680668 kubelet[2754]: E1212 17:19:57.680658 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.680668 kubelet[2754]: W1212 17:19:57.680667 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.680723 kubelet[2754]: E1212 17:19:57.680676 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.680855 kubelet[2754]: E1212 17:19:57.680824 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.680855 kubelet[2754]: W1212 17:19:57.680837 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.680855 kubelet[2754]: E1212 17:19:57.680845 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:57.680988 kubelet[2754]: E1212 17:19:57.680973 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.680988 kubelet[2754]: W1212 17:19:57.680985 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.680988 kubelet[2754]: E1212 17:19:57.680992 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.681446 kubelet[2754]: E1212 17:19:57.681144 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.681446 kubelet[2754]: W1212 17:19:57.681151 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.681446 kubelet[2754]: E1212 17:19:57.681170 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.681446 kubelet[2754]: E1212 17:19:57.681288 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.681446 kubelet[2754]: W1212 17:19:57.681294 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.681446 kubelet[2754]: E1212 17:19:57.681301 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.681446 kubelet[2754]: E1212 17:19:57.681480 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.681446 kubelet[2754]: W1212 17:19:57.681489 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.681446 kubelet[2754]: E1212 17:19:57.681498 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.682023 kubelet[2754]: E1212 17:19:57.681678 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.682023 kubelet[2754]: W1212 17:19:57.681687 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.682023 kubelet[2754]: E1212 17:19:57.681694 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:19:57.682023 kubelet[2754]: E1212 17:19:57.681830 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:19:57.682023 kubelet[2754]: W1212 17:19:57.681836 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:19:57.682023 kubelet[2754]: E1212 17:19:57.681843 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:19:57.689657 systemd[1]: cri-containerd-a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3.scope: Deactivated successfully. Dec 12 17:19:57.697000 audit: BPF prog-id=166 op=UNLOAD Dec 12 17:19:57.870436 containerd[1581]: time="2025-12-12T17:19:57.870294600Z" level=info msg="StartContainer for \"a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3\" returns successfully" Dec 12 17:19:57.873667 containerd[1581]: time="2025-12-12T17:19:57.873293125Z" level=info msg="received container exit event container_id:\"a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3\" id:\"a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3\" pid:3432 exited_at:{seconds:1765559997 nanos:704382726}" Dec 12 17:19:57.922199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a512550be9205a17503bf8dd9700f067146a4aa3d63847fe4a951593c2957ed3-rootfs.mount: Deactivated successfully. Dec 12 17:19:58.510902 kubelet[2754]: E1212 17:19:58.510849 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:19:58.622175 kubelet[2754]: I1212 17:19:58.622106 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:19:58.622685 kubelet[2754]: E1212 17:19:58.622658 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:58.623061 kubelet[2754]: E1212 17:19:58.623019 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:19:58.624250 containerd[1581]: time="2025-12-12T17:19:58.624212822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:19:58.645662 kubelet[2754]: I1212 17:19:58.645557 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bf5d6655d-whp7k" podStartSLOduration=3.468299933 podStartE2EDuration="5.645490135s" podCreationTimestamp="2025-12-12 17:19:53 +0000 UTC" firstStartedPulling="2025-12-12 17:19:54.316486427 +0000 UTC m=+22.899618474" lastFinishedPulling="2025-12-12 17:19:56.493676629 +0000 UTC m=+25.076808676" observedRunningTime="2025-12-12 17:19:57.630200203 +0000 UTC m=+26.213332290" watchObservedRunningTime="2025-12-12 17:19:58.645490135 +0000 UTC m=+27.228622182" Dec 12 17:20:00.510662 kubelet[2754]: E1212 17:20:00.510585 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:00.677768 containerd[1581]: time="2025-12-12T17:20:00.677671740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:20:00.678577 containerd[1581]: time="2025-12-12T17:20:00.678277740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:20:00.679405 containerd[1581]: time="2025-12-12T17:20:00.679358622Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:20:00.681476 containerd[1581]: time="2025-12-12T17:20:00.681424585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:20:00.682549 containerd[1581]: time="2025-12-12T17:20:00.682481226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.057852924s" Dec 12 17:20:00.682549 containerd[1581]: time="2025-12-12T17:20:00.682542106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:20:00.686701 containerd[1581]: time="2025-12-12T17:20:00.686632392Z" level=info msg="CreateContainer within sandbox \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:20:00.702197 containerd[1581]: time="2025-12-12T17:20:00.701759252Z" level=info msg="Container 4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:20:00.710836 containerd[1581]: time="2025-12-12T17:20:00.710788265Z" level=info msg="CreateContainer within sandbox \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27\"" Dec 12 17:20:00.711247 containerd[1581]: time="2025-12-12T17:20:00.711217785Z" level=info msg="StartContainer for \"4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27\"" Dec 12 17:20:00.713443 containerd[1581]: time="2025-12-12T17:20:00.713408468Z" level=info msg="connecting to shim 4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27" address="unix:///run/containerd/s/0cfca307f53a7e47662de9a1573d8ceb7143966524487b5911fab6feb31fb10e" protocol=ttrpc version=3 Dec 12 17:20:00.741806 systemd[1]: Started cri-containerd-4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27.scope - libcontainer container 4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27. 
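The audit PROCTITLE records above and below store the audited process's command line as hex-encoded, NUL-separated argv, which is why they look opaque. Decoding them is mechanical; a small sketch (the helper name is mine, not part of auditd):

```python
def decode_proctitle(hex_proctitle: str) -> list[str]:
    """Audit PROCTITLE records hold argv hex-encoded with NUL separators."""
    return bytes.fromhex(hex_proctitle).decode(errors="replace").split("\x00")

# The iptables-restore PROCTITLE logged earlier decodes to:
#   ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
```

The runc PROCTITLE records decode the same way, to `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…` (the container ID portion is truncated in the record).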
Dec 12 17:20:00.795000 audit: BPF prog-id=167 op=LOAD Dec 12 17:20:00.795000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3296 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:00.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393139386334323135306634616134343435613736333338616534 Dec 12 17:20:00.795000 audit: BPF prog-id=168 op=LOAD Dec 12 17:20:00.795000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3296 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:00.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393139386334323135306634616134343435613736333338616534 Dec 12 17:20:00.795000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:20:00.795000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:00.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393139386334323135306634616134343435613736333338616534 Dec 12 17:20:00.795000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:20:00.795000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:00.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393139386334323135306634616134343435613736333338616534 Dec 12 17:20:00.795000 audit: BPF prog-id=169 op=LOAD Dec 12 17:20:00.795000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3296 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:00.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393139386334323135306634616134343435613736333338616534 Dec 12 17:20:00.816169 containerd[1581]: time="2025-12-12T17:20:00.816109928Z" level=info msg="StartContainer for 
\"4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27\" returns successfully" Dec 12 17:20:01.420335 systemd[1]: cri-containerd-4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27.scope: Deactivated successfully. Dec 12 17:20:01.420686 systemd[1]: cri-containerd-4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27.scope: Consumed 523ms CPU time, 180.3M memory peak, 2M read from disk, 165.9M written to disk. Dec 12 17:20:01.424347 containerd[1581]: time="2025-12-12T17:20:01.424270880Z" level=info msg="received container exit event container_id:\"4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27\" id:\"4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27\" pid:3513 exited_at:{seconds:1765560001 nanos:424039800}" Dec 12 17:20:01.426000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:20:01.434701 kubelet[2754]: I1212 17:20:01.434662 2754 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:20:01.449482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e9198c42150f4aa4445a76338ae4ba4e9355fdc4298278bdcbbecd34da69a27-rootfs.mount: Deactivated successfully. Dec 12 17:20:01.560902 systemd[1]: Created slice kubepods-burstable-pod58bcfb02_e7ab_42c9_aa93_1365df71ae6d.slice - libcontainer container kubepods-burstable-pod58bcfb02_e7ab_42c9_aa93_1365df71ae6d.slice. Dec 12 17:20:01.570892 systemd[1]: Created slice kubepods-besteffort-pod2ac1174a_7255_43cd_9145_6ba385a2a343.slice - libcontainer container kubepods-besteffort-pod2ac1174a_7255_43cd_9145_6ba385a2a343.slice. Dec 12 17:20:01.578411 systemd[1]: Created slice kubepods-burstable-poda725ab6d_a9ea_4de3_a9de_4d649ce6ecf7.slice - libcontainer container kubepods-burstable-poda725ab6d_a9ea_4de3_a9de_4d649ce6ecf7.slice. Dec 12 17:20:01.584184 systemd[1]: Created slice kubepods-besteffort-podf493074d_c6eb_434c_b64e_346bcf34db0d.slice - libcontainer container kubepods-besteffort-podf493074d_c6eb_434c_b64e_346bcf34db0d.slice. Dec 12 17:20:01.592370 systemd[1]: Created slice kubepods-besteffort-pod0e2d55e4_7343_4bc1_8a02_a707014e8ced.slice - libcontainer container kubepods-besteffort-pod0e2d55e4_7343_4bc1_8a02_a707014e8ced.slice. Dec 12 17:20:01.598187 systemd[1]: Created slice kubepods-besteffort-podf35c0998_e01c_46ee_bdc1_a591da003d92.slice - libcontainer container kubepods-besteffort-podf35c0998_e01c_46ee_bdc1_a591da003d92.slice. Dec 12 17:20:01.604836 systemd[1]: Created slice kubepods-besteffort-podcb1421d6_b78b_486c_8c39_3bb1de51e7d3.slice - libcontainer container kubepods-besteffort-podcb1421d6_b78b_486c_8c39_3bb1de51e7d3.slice. 
Dec 12 17:20:01.623483 kubelet[2754]: I1212 17:20:01.623417 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqn8f\" (UniqueName: \"kubernetes.io/projected/58bcfb02-e7ab-42c9-aa93-1365df71ae6d-kube-api-access-pqn8f\") pod \"coredns-674b8bbfcf-785fj\" (UID: \"58bcfb02-e7ab-42c9-aa93-1365df71ae6d\") " pod="kube-system/coredns-674b8bbfcf-785fj" Dec 12 17:20:01.623483 kubelet[2754]: I1212 17:20:01.623460 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f493074d-c6eb-434c-b64e-346bcf34db0d-goldmane-key-pair\") pod \"goldmane-666569f655-nj7gb\" (UID: \"f493074d-c6eb-434c-b64e-346bcf34db0d\") " pod="calico-system/goldmane-666569f655-nj7gb" Dec 12 17:20:01.623483 kubelet[2754]: I1212 17:20:01.623479 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ac1174a-7255-43cd-9145-6ba385a2a343-tigera-ca-bundle\") pod \"calico-kube-controllers-578b47d77d-m8dkw\" (UID: \"2ac1174a-7255-43cd-9145-6ba385a2a343\") " pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" Dec 12 17:20:01.623483 kubelet[2754]: I1212 17:20:01.623498 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-ca-bundle\") pod \"whisker-7b5d6bc8b5-x7m2w\" (UID: \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\") " pod="calico-system/whisker-7b5d6bc8b5-x7m2w" Dec 12 17:20:01.624975 kubelet[2754]: I1212 17:20:01.624396 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f35c0998-e01c-46ee-bdc1-a591da003d92-calico-apiserver-certs\") pod \"calico-apiserver-84646c749c-5xrgw\" (UID: \"f35c0998-e01c-46ee-bdc1-a591da003d92\") " pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" Dec 12 17:20:01.624975 kubelet[2754]: I1212 17:20:01.624496 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqlm\" (UniqueName: \"kubernetes.io/projected/f35c0998-e01c-46ee-bdc1-a591da003d92-kube-api-access-9cqlm\") pod \"calico-apiserver-84646c749c-5xrgw\" (UID: \"f35c0998-e01c-46ee-bdc1-a591da003d92\") " pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" Dec 12 17:20:01.624975 kubelet[2754]: I1212 17:20:01.624621 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4jj\" (UniqueName: \"kubernetes.io/projected/0e2d55e4-7343-4bc1-8a02-a707014e8ced-kube-api-access-sv4jj\") pod \"calico-apiserver-84646c749c-wxdfq\" (UID: \"0e2d55e4-7343-4bc1-8a02-a707014e8ced\") " pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" Dec 12 17:20:01.624975 kubelet[2754]: I1212 17:20:01.624654 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzgn\" (UniqueName: \"kubernetes.io/projected/2ac1174a-7255-43cd-9145-6ba385a2a343-kube-api-access-cfzgn\") pod \"calico-kube-controllers-578b47d77d-m8dkw\" (UID: \"2ac1174a-7255-43cd-9145-6ba385a2a343\") " pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" Dec 12 17:20:01.624975 kubelet[2754]: I1212 17:20:01.624679 2754 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bcfb02-e7ab-42c9-aa93-1365df71ae6d-config-volume\") pod \"coredns-674b8bbfcf-785fj\" (UID: \"58bcfb02-e7ab-42c9-aa93-1365df71ae6d\") " pod="kube-system/coredns-674b8bbfcf-785fj" Dec 12 17:20:01.625097 kubelet[2754]: I1212 17:20:01.624697 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f493074d-c6eb-434c-b64e-346bcf34db0d-goldmane-ca-bundle\") pod \"goldmane-666569f655-nj7gb\" (UID: \"f493074d-c6eb-434c-b64e-346bcf34db0d\") " pod="calico-system/goldmane-666569f655-nj7gb" Dec 12 17:20:01.625097 kubelet[2754]: I1212 17:20:01.624734 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc9tr\" (UniqueName: \"kubernetes.io/projected/f493074d-c6eb-434c-b64e-346bcf34db0d-kube-api-access-hc9tr\") pod \"goldmane-666569f655-nj7gb\" (UID: \"f493074d-c6eb-434c-b64e-346bcf34db0d\") " pod="calico-system/goldmane-666569f655-nj7gb" Dec 12 17:20:01.625097 kubelet[2754]: I1212 17:20:01.624785 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7-config-volume\") pod \"coredns-674b8bbfcf-ldm6n\" (UID: \"a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7\") " pod="kube-system/coredns-674b8bbfcf-ldm6n" Dec 12 17:20:01.625097 kubelet[2754]: I1212 17:20:01.624837 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e2d55e4-7343-4bc1-8a02-a707014e8ced-calico-apiserver-certs\") pod \"calico-apiserver-84646c749c-wxdfq\" (UID: \"0e2d55e4-7343-4bc1-8a02-a707014e8ced\") " pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" Dec 12 17:20:01.625097 kubelet[2754]: I1212 17:20:01.624875 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdtl\" (UniqueName: \"kubernetes.io/projected/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-kube-api-access-2bdtl\") pod \"whisker-7b5d6bc8b5-x7m2w\" (UID: \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\") " pod="calico-system/whisker-7b5d6bc8b5-x7m2w" Dec 12 17:20:01.625227 kubelet[2754]: I1212 17:20:01.624911 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-backend-key-pair\") pod \"whisker-7b5d6bc8b5-x7m2w\" (UID: \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\") " pod="calico-system/whisker-7b5d6bc8b5-x7m2w" Dec 12 17:20:01.625227 kubelet[2754]: I1212 17:20:01.624928 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f493074d-c6eb-434c-b64e-346bcf34db0d-config\") pod \"goldmane-666569f655-nj7gb\" (UID: \"f493074d-c6eb-434c-b64e-346bcf34db0d\") " pod="calico-system/goldmane-666569f655-nj7gb" Dec 12 17:20:01.625227 kubelet[2754]: I1212 17:20:01.624971 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27rc\" (UniqueName: \"kubernetes.io/projected/a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7-kube-api-access-t27rc\") pod \"coredns-674b8bbfcf-ldm6n\" (UID: \"a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7\") 
" pod="kube-system/coredns-674b8bbfcf-ldm6n" Dec 12 17:20:01.638524 kubelet[2754]: E1212 17:20:01.638207 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:01.639270 containerd[1581]: time="2025-12-12T17:20:01.639237315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:20:01.866252 kubelet[2754]: E1212 17:20:01.866199 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:01.867387 containerd[1581]: time="2025-12-12T17:20:01.867334966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-785fj,Uid:58bcfb02-e7ab-42c9-aa93-1365df71ae6d,Namespace:kube-system,Attempt:0,}" Dec 12 17:20:01.877071 containerd[1581]: time="2025-12-12T17:20:01.877000738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b47d77d-m8dkw,Uid:2ac1174a-7255-43cd-9145-6ba385a2a343,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:01.884432 kubelet[2754]: E1212 17:20:01.882666 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:01.885556 containerd[1581]: time="2025-12-12T17:20:01.885391949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldm6n,Uid:a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7,Namespace:kube-system,Attempt:0,}" Dec 12 17:20:01.892336 containerd[1581]: time="2025-12-12T17:20:01.892277638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nj7gb,Uid:f493074d-c6eb-434c-b64e-346bcf34db0d,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:01.897528 containerd[1581]: time="2025-12-12T17:20:01.897464844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-wxdfq,Uid:0e2d55e4-7343-4bc1-8a02-a707014e8ced,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:20:01.902605 containerd[1581]: time="2025-12-12T17:20:01.902532371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-5xrgw,Uid:f35c0998-e01c-46ee-bdc1-a591da003d92,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:20:01.909918 containerd[1581]: time="2025-12-12T17:20:01.909831980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5d6bc8b5-x7m2w,Uid:cb1421d6-b78b-486c-8c39-3bb1de51e7d3,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:02.012244 containerd[1581]: time="2025-12-12T17:20:02.012185750Z" level=error msg="Failed to destroy network for sandbox \"82fc47fc56c16b81c7a10ef35ca21b9a0a56a792eab80e40be6d5a965549581c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.016498 containerd[1581]: time="2025-12-12T17:20:02.016431475Z" level=error msg="Failed to destroy network for sandbox \"f0791cfe76e0a0c095d6347565c9fb6a1e73bc326bd9ea7b3a6128b9a902c299\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.016672 containerd[1581]: time="2025-12-12T17:20:02.016521395Z" level=error msg="Failed to destroy network for sandbox 
\"cc66436cae8d44c86eed143a8d1f03f9146443c55b8d62bab1f5989a900573db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.017751 containerd[1581]: time="2025-12-12T17:20:02.017556636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-785fj,Uid:58bcfb02-e7ab-42c9-aa93-1365df71ae6d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82fc47fc56c16b81c7a10ef35ca21b9a0a56a792eab80e40be6d5a965549581c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.019759 containerd[1581]: time="2025-12-12T17:20:02.019696079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b47d77d-m8dkw,Uid:2ac1174a-7255-43cd-9145-6ba385a2a343,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc66436cae8d44c86eed143a8d1f03f9146443c55b8d62bab1f5989a900573db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.020958 containerd[1581]: time="2025-12-12T17:20:02.020567400Z" level=error msg="Failed to destroy network for sandbox \"082a8770dba40376cc8c37e5083c55112f5125108b0d8a64281fe31fdd4bb20c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.021233 kubelet[2754]: E1212 17:20:02.021100 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc66436cae8d44c86eed143a8d1f03f9146443c55b8d62bab1f5989a900573db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.021233 kubelet[2754]: E1212 17:20:02.021194 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc66436cae8d44c86eed143a8d1f03f9146443c55b8d62bab1f5989a900573db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" Dec 12 17:20:02.021233 kubelet[2754]: E1212 17:20:02.021216 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc66436cae8d44c86eed143a8d1f03f9146443c55b8d62bab1f5989a900573db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" Dec 12 17:20:02.021364 containerd[1581]: time="2025-12-12T17:20:02.021126801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-5xrgw,Uid:f35c0998-e01c-46ee-bdc1-a591da003d92,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0791cfe76e0a0c095d6347565c9fb6a1e73bc326bd9ea7b3a6128b9a902c299\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.021421 kubelet[2754]: E1212 17:20:02.021270 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-578b47d77d-m8dkw_calico-system(2ac1174a-7255-43cd-9145-6ba385a2a343)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-578b47d77d-m8dkw_calico-system(2ac1174a-7255-43cd-9145-6ba385a2a343)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc66436cae8d44c86eed143a8d1f03f9146443c55b8d62bab1f5989a900573db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:02.022826 kubelet[2754]: E1212 17:20:02.022650 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0791cfe76e0a0c095d6347565c9fb6a1e73bc326bd9ea7b3a6128b9a902c299\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.022826 kubelet[2754]: E1212 17:20:02.022712 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0791cfe76e0a0c095d6347565c9fb6a1e73bc326bd9ea7b3a6128b9a902c299\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" Dec 12 17:20:02.022826 kubelet[2754]: E1212 17:20:02.022731 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0791cfe76e0a0c095d6347565c9fb6a1e73bc326bd9ea7b3a6128b9a902c299\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" Dec 12 17:20:02.022974 kubelet[2754]: E1212 17:20:02.022781 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84646c749c-5xrgw_calico-apiserver(f35c0998-e01c-46ee-bdc1-a591da003d92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84646c749c-5xrgw_calico-apiserver(f35c0998-e01c-46ee-bdc1-a591da003d92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0791cfe76e0a0c095d6347565c9fb6a1e73bc326bd9ea7b3a6128b9a902c299\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:02.023808 kubelet[2754]: E1212 17:20:02.023277 2754 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82fc47fc56c16b81c7a10ef35ca21b9a0a56a792eab80e40be6d5a965549581c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.024065 kubelet[2754]: E1212 17:20:02.024038 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82fc47fc56c16b81c7a10ef35ca21b9a0a56a792eab80e40be6d5a965549581c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-785fj" Dec 12 17:20:02.024167 kubelet[2754]: E1212 17:20:02.024137 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82fc47fc56c16b81c7a10ef35ca21b9a0a56a792eab80e40be6d5a965549581c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-785fj" Dec 12 17:20:02.024300 kubelet[2754]: E1212 17:20:02.024273 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-785fj_kube-system(58bcfb02-e7ab-42c9-aa93-1365df71ae6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-785fj_kube-system(58bcfb02-e7ab-42c9-aa93-1365df71ae6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82fc47fc56c16b81c7a10ef35ca21b9a0a56a792eab80e40be6d5a965549581c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-785fj" podUID="58bcfb02-e7ab-42c9-aa93-1365df71ae6d" Dec 12 17:20:02.027367 containerd[1581]: time="2025-12-12T17:20:02.027293168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldm6n,Uid:a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a8770dba40376cc8c37e5083c55112f5125108b0d8a64281fe31fdd4bb20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.027594 kubelet[2754]: E1212 17:20:02.027552 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a8770dba40376cc8c37e5083c55112f5125108b0d8a64281fe31fdd4bb20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.027644 kubelet[2754]: E1212 17:20:02.027611 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a8770dba40376cc8c37e5083c55112f5125108b0d8a64281fe31fdd4bb20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ldm6n" Dec 12 17:20:02.027644 kubelet[2754]: E1212 17:20:02.027636 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a8770dba40376cc8c37e5083c55112f5125108b0d8a64281fe31fdd4bb20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ldm6n" Dec 12 17:20:02.027711 kubelet[2754]: E1212 17:20:02.027677 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ldm6n_kube-system(a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ldm6n_kube-system(a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"082a8770dba40376cc8c37e5083c55112f5125108b0d8a64281fe31fdd4bb20c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ldm6n" podUID="a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7" Dec 12 17:20:02.035851 containerd[1581]: time="2025-12-12T17:20:02.035694898Z" level=error msg="Failed to destroy network for sandbox \"a0901ee82d61c7aca3530cf2e1e74c0b8202a2666859a61a5cf5f1335fbb3d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.038005 containerd[1581]: time="2025-12-12T17:20:02.037878341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5d6bc8b5-x7m2w,Uid:cb1421d6-b78b-486c-8c39-3bb1de51e7d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0901ee82d61c7aca3530cf2e1e74c0b8202a2666859a61a5cf5f1335fbb3d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.038335 kubelet[2754]: E1212 17:20:02.038289 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0901ee82d61c7aca3530cf2e1e74c0b8202a2666859a61a5cf5f1335fbb3d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.038410 kubelet[2754]: E1212 17:20:02.038360 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0901ee82d61c7aca3530cf2e1e74c0b8202a2666859a61a5cf5f1335fbb3d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b5d6bc8b5-x7m2w" Dec 12 17:20:02.038410 kubelet[2754]: E1212 17:20:02.038380 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a0901ee82d61c7aca3530cf2e1e74c0b8202a2666859a61a5cf5f1335fbb3d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b5d6bc8b5-x7m2w" Dec 12 17:20:02.039302 kubelet[2754]: E1212 17:20:02.038435 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b5d6bc8b5-x7m2w_calico-system(cb1421d6-b78b-486c-8c39-3bb1de51e7d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b5d6bc8b5-x7m2w_calico-system(cb1421d6-b78b-486c-8c39-3bb1de51e7d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0901ee82d61c7aca3530cf2e1e74c0b8202a2666859a61a5cf5f1335fbb3d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b5d6bc8b5-x7m2w" podUID="cb1421d6-b78b-486c-8c39-3bb1de51e7d3" Dec 12 17:20:02.043209 containerd[1581]: time="2025-12-12T17:20:02.043153307Z" level=error msg="Failed to destroy network for sandbox \"e7bc6549eb42dad7bab0fc72b57a45350884665d4428bead7d9abf5073326813\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.046098 containerd[1581]: time="2025-12-12T17:20:02.046042111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nj7gb,Uid:f493074d-c6eb-434c-b64e-346bcf34db0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc6549eb42dad7bab0fc72b57a45350884665d4428bead7d9abf5073326813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.046346 kubelet[2754]: E1212 17:20:02.046301 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc6549eb42dad7bab0fc72b57a45350884665d4428bead7d9abf5073326813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.047121 kubelet[2754]: E1212 17:20:02.047096 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc6549eb42dad7bab0fc72b57a45350884665d4428bead7d9abf5073326813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nj7gb" Dec 12 17:20:02.047186 kubelet[2754]: E1212 17:20:02.047127 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc6549eb42dad7bab0fc72b57a45350884665d4428bead7d9abf5073326813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nj7gb" Dec 12 17:20:02.047226 kubelet[2754]: E1212 
17:20:02.047177 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nj7gb_calico-system(f493074d-c6eb-434c-b64e-346bcf34db0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nj7gb_calico-system(f493074d-c6eb-434c-b64e-346bcf34db0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7bc6549eb42dad7bab0fc72b57a45350884665d4428bead7d9abf5073326813\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:02.052233 containerd[1581]: time="2025-12-12T17:20:02.052120998Z" level=error msg="Failed to destroy network for sandbox \"cb244a238b4d8e4c6ee059ff1d64ecccb15e41fe6bdadd611f457fb1c5fa1ff8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.054180 containerd[1581]: time="2025-12-12T17:20:02.054101880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-wxdfq,Uid:0e2d55e4-7343-4bc1-8a02-a707014e8ced,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb244a238b4d8e4c6ee059ff1d64ecccb15e41fe6bdadd611f457fb1c5fa1ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.054377 kubelet[2754]: E1212 17:20:02.054337 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb244a238b4d8e4c6ee059ff1d64ecccb15e41fe6bdadd611f457fb1c5fa1ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.054428 kubelet[2754]: E1212 17:20:02.054394 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb244a238b4d8e4c6ee059ff1d64ecccb15e41fe6bdadd611f457fb1c5fa1ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" Dec 12 17:20:02.054428 kubelet[2754]: E1212 17:20:02.054416 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb244a238b4d8e4c6ee059ff1d64ecccb15e41fe6bdadd611f457fb1c5fa1ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" Dec 12 17:20:02.054501 kubelet[2754]: E1212 17:20:02.054464 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84646c749c-wxdfq_calico-apiserver(0e2d55e4-7343-4bc1-8a02-a707014e8ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-84646c749c-wxdfq_calico-apiserver(0e2d55e4-7343-4bc1-8a02-a707014e8ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb244a238b4d8e4c6ee059ff1d64ecccb15e41fe6bdadd611f457fb1c5fa1ff8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:02.518319 systemd[1]: Created slice kubepods-besteffort-pod9c455fe7_f7d2_456e_ac64_f3619ba04a75.slice - libcontainer container kubepods-besteffort-pod9c455fe7_f7d2_456e_ac64_f3619ba04a75.slice. Dec 12 17:20:02.521640 containerd[1581]: time="2025-12-12T17:20:02.521498840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p27qt,Uid:9c455fe7-f7d2-456e-ac64-f3619ba04a75,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:02.580277 containerd[1581]: time="2025-12-12T17:20:02.580204870Z" level=error msg="Failed to destroy network for sandbox \"e55621c0b174fd79d67dce682eb5fcc16d1735f6e0c0b52f31a52bed0f9df420\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.582315 containerd[1581]: time="2025-12-12T17:20:02.582267472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p27qt,Uid:9c455fe7-f7d2-456e-ac64-f3619ba04a75,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55621c0b174fd79d67dce682eb5fcc16d1735f6e0c0b52f31a52bed0f9df420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.582715 kubelet[2754]: E1212 17:20:02.582674 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55621c0b174fd79d67dce682eb5fcc16d1735f6e0c0b52f31a52bed0f9df420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:20:02.582823 kubelet[2754]: E1212 17:20:02.582741 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55621c0b174fd79d67dce682eb5fcc16d1735f6e0c0b52f31a52bed0f9df420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p27qt" Dec 12 17:20:02.582823 kubelet[2754]: E1212 17:20:02.582763 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55621c0b174fd79d67dce682eb5fcc16d1735f6e0c0b52f31a52bed0f9df420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p27qt" Dec 12 17:20:02.582911 kubelet[2754]: E1212 17:20:02.582812 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e55621c0b174fd79d67dce682eb5fcc16d1735f6e0c0b52f31a52bed0f9df420\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:05.938167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156387291.mount: Deactivated successfully. Dec 12 17:20:06.182665 containerd[1581]: time="2025-12-12T17:20:06.182590142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:20:06.183146 containerd[1581]: time="2025-12-12T17:20:06.183083023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:20:06.184052 containerd[1581]: time="2025-12-12T17:20:06.183992184Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:20:06.185882 containerd[1581]: time="2025-12-12T17:20:06.185834785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:20:06.186600 containerd[1581]: time="2025-12-12T17:20:06.186407626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.547015871s" Dec 12 17:20:06.186600 containerd[1581]: time="2025-12-12T17:20:06.186435826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:20:06.203655 containerd[1581]: time="2025-12-12T17:20:06.203424762Z" level=info msg="CreateContainer within sandbox \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:20:06.212905 containerd[1581]: time="2025-12-12T17:20:06.212653250Z" level=info msg="Container 488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:20:06.223708 containerd[1581]: time="2025-12-12T17:20:06.223665980Z" level=info msg="CreateContainer within sandbox \"8124666aae46e2f3887163c576da145189f6aac45216963843c17add6796749e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172\"" Dec 12 17:20:06.224190 containerd[1581]: time="2025-12-12T17:20:06.224159301Z" level=info msg="StartContainer for \"488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172\"" Dec 12 17:20:06.225773 containerd[1581]: time="2025-12-12T17:20:06.225742622Z" level=info msg="connecting to shim 
488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172" address="unix:///run/containerd/s/0cfca307f53a7e47662de9a1573d8ceb7143966524487b5911fab6feb31fb10e" protocol=ttrpc version=3 Dec 12 17:20:06.248799 systemd[1]: Started cri-containerd-488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172.scope - libcontainer container 488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172. Dec 12 17:20:06.303000 audit: BPF prog-id=170 op=LOAD Dec 12 17:20:06.305935 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 12 17:20:06.305994 kernel: audit: type=1334 audit(1765560006.303:558): prog-id=170 op=LOAD Dec 12 17:20:06.306032 kernel: audit: type=1300 audit(1765560006.303:558): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.303000 audit[3816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.311907 kernel: audit: type=1327 audit(1765560006.303:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.311994 kernel: audit: type=1334 audit(1765560006.304:559): prog-id=171 op=LOAD Dec 12 17:20:06.304000 audit: BPF prog-id=171 op=LOAD Dec 12 17:20:06.312641 kernel: audit: type=1300 audit(1765560006.304:559): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.304000 audit[3816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.319335 kernel: audit: type=1327 audit(1765560006.304:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.319536 kernel: audit: type=1334 audit(1765560006.304:560): prog-id=171 op=UNLOAD Dec 12 
17:20:06.304000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:20:06.304000 audit[3816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.323036 kernel: audit: type=1300 audit(1765560006.304:560): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.326267 kernel: audit: type=1327 audit(1765560006.304:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.326338 kernel: audit: type=1334 audit(1765560006.304:561): prog-id=170 op=UNLOAD Dec 12 17:20:06.304000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:20:06.304000 audit[3816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.304000 audit: BPF prog-id=172 op=LOAD Dec 12 17:20:06.304000 audit[3816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3296 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:06.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383637336532393935323233633865633837346163383235386435 Dec 12 17:20:06.340732 containerd[1581]: time="2025-12-12T17:20:06.340694089Z" level=info msg="StartContainer for \"488673e2995223c8ec874ac8258d50136b3a79df73658ff84401ef3748553172\" returns successfully" Dec 12 17:20:06.507551 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:20:06.507724 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 12 17:20:06.678233 kubelet[2754]: E1212 17:20:06.677251 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:06.697778 kubelet[2754]: I1212 17:20:06.692849 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-th882" podStartSLOduration=1.232576317 podStartE2EDuration="12.692833214s" podCreationTimestamp="2025-12-12 17:19:54 +0000 UTC" firstStartedPulling="2025-12-12 17:19:54.72695017 +0000 UTC m=+23.310082177" lastFinishedPulling="2025-12-12 17:20:06.187207027 +0000 UTC m=+34.770339074" observedRunningTime="2025-12-12 17:20:06.692284694 +0000 UTC m=+35.275416741" watchObservedRunningTime="2025-12-12 17:20:06.692833214 +0000 UTC m=+35.275965261" Dec 12 17:20:06.763866 kubelet[2754]: I1212 17:20:06.763821 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-ca-bundle\") pod \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\" (UID: \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\") " Dec 12 17:20:06.764165 kubelet[2754]: I1212 17:20:06.764111 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-backend-key-pair\") pod \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\" (UID: \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\") " Dec 12 17:20:06.764442 kubelet[2754]: I1212 17:20:06.764168 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdtl\" (UniqueName: \"kubernetes.io/projected/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-kube-api-access-2bdtl\") pod \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\" (UID: \"cb1421d6-b78b-486c-8c39-3bb1de51e7d3\") " Dec 12 17:20:06.774243 kubelet[2754]: I1212 17:20:06.774188 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cb1421d6-b78b-486c-8c39-3bb1de51e7d3" (UID: "cb1421d6-b78b-486c-8c39-3bb1de51e7d3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:20:06.785208 kubelet[2754]: I1212 17:20:06.784771 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-kube-api-access-2bdtl" (OuterVolumeSpecName: "kube-api-access-2bdtl") pod "cb1421d6-b78b-486c-8c39-3bb1de51e7d3" (UID: "cb1421d6-b78b-486c-8c39-3bb1de51e7d3"). InnerVolumeSpecName "kube-api-access-2bdtl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:20:06.786032 kubelet[2754]: I1212 17:20:06.785451 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cb1421d6-b78b-486c-8c39-3bb1de51e7d3" (UID: "cb1421d6-b78b-486c-8c39-3bb1de51e7d3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:20:06.865335 kubelet[2754]: I1212 17:20:06.865295 2754 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2bdtl\" (UniqueName: \"kubernetes.io/projected/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-kube-api-access-2bdtl\") on node \"localhost\" DevicePath \"\"" Dec 12 17:20:06.865542 kubelet[2754]: I1212 17:20:06.865528 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:20:06.865634 kubelet[2754]: I1212 17:20:06.865612 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb1421d6-b78b-486c-8c39-3bb1de51e7d3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:20:06.939116 systemd[1]: var-lib-kubelet-pods-cb1421d6\x2db78b\x2d486c\x2d8c39\x2d3bb1de51e7d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2bdtl.mount: Deactivated successfully. Dec 12 17:20:06.939231 systemd[1]: var-lib-kubelet-pods-cb1421d6\x2db78b\x2d486c\x2d8c39\x2d3bb1de51e7d3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:20:06.975458 systemd[1]: Removed slice kubepods-besteffort-podcb1421d6_b78b_486c_8c39_3bb1de51e7d3.slice - libcontainer container kubepods-besteffort-podcb1421d6_b78b_486c_8c39_3bb1de51e7d3.slice. Dec 12 17:20:07.028601 systemd[1]: Created slice kubepods-besteffort-pod62dd324a_6db6_477a_9870_2e631369a8d1.slice - libcontainer container kubepods-besteffort-pod62dd324a_6db6_477a_9870_2e631369a8d1.slice. Dec 12 17:20:07.066918 kubelet[2754]: I1212 17:20:07.066819 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/62dd324a-6db6-477a-9870-2e631369a8d1-whisker-backend-key-pair\") pod \"whisker-7847574df-8wxnl\" (UID: \"62dd324a-6db6-477a-9870-2e631369a8d1\") " pod="calico-system/whisker-7847574df-8wxnl" Dec 12 17:20:07.066918 kubelet[2754]: I1212 17:20:07.066868 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62dd324a-6db6-477a-9870-2e631369a8d1-whisker-ca-bundle\") pod \"whisker-7847574df-8wxnl\" (UID: \"62dd324a-6db6-477a-9870-2e631369a8d1\") " pod="calico-system/whisker-7847574df-8wxnl" Dec 12 17:20:07.066918 kubelet[2754]: I1212 17:20:07.066887 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kglt2\" (UniqueName: \"kubernetes.io/projected/62dd324a-6db6-477a-9870-2e631369a8d1-kube-api-access-kglt2\") pod \"whisker-7847574df-8wxnl\" (UID: \"62dd324a-6db6-477a-9870-2e631369a8d1\") " pod="calico-system/whisker-7847574df-8wxnl" Dec 12 17:20:07.331572 containerd[1581]: time="2025-12-12T17:20:07.331492026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7847574df-8wxnl,Uid:62dd324a-6db6-477a-9870-2e631369a8d1,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:07.514560 kubelet[2754]: I1212 17:20:07.513101 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1421d6-b78b-486c-8c39-3bb1de51e7d3" path="/var/lib/kubelet/pods/cb1421d6-b78b-486c-8c39-3bb1de51e7d3/volumes" Dec 12 17:20:07.583092 systemd-networkd[1497]: cali4c523fa8711: Link UP Dec 12 17:20:07.583732 
systemd-networkd[1497]: cali4c523fa8711: Gained carrier Dec 12 17:20:07.601803 containerd[1581]: 2025-12-12 17:20:07.418 [INFO][3882] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:20:07.601803 containerd[1581]: 2025-12-12 17:20:07.458 [INFO][3882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7847574df--8wxnl-eth0 whisker-7847574df- calico-system 62dd324a-6db6-477a-9870-2e631369a8d1 899 0 2025-12-12 17:20:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7847574df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7847574df-8wxnl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4c523fa8711 [] [] }} ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-" Dec 12 17:20:07.601803 containerd[1581]: 2025-12-12 17:20:07.458 [INFO][3882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.601803 containerd[1581]: 2025-12-12 17:20:07.531 [INFO][3897] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" HandleID="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Workload="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.531 [INFO][3897] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" HandleID="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Workload="localhost-k8s-whisker--7847574df--8wxnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136b20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7847574df-8wxnl", "timestamp":"2025-12-12 17:20:07.531366439 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.531 [INFO][3897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.531 [INFO][3897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.531 [INFO][3897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.542 [INFO][3897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" host="localhost" Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.549 [INFO][3897] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.554 [INFO][3897] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.556 [INFO][3897] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.559 [INFO][3897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:07.602019 containerd[1581]: 2025-12-12 17:20:07.559 [INFO][3897] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" host="localhost" Dec 12 17:20:07.602236 containerd[1581]: 2025-12-12 17:20:07.561 [INFO][3897] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf Dec 12 17:20:07.602236 containerd[1581]: 2025-12-12 17:20:07.565 [INFO][3897] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" host="localhost" Dec 12 17:20:07.602236 containerd[1581]: 2025-12-12 17:20:07.572 [INFO][3897] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" host="localhost" Dec 12 17:20:07.602236 containerd[1581]: 2025-12-12 17:20:07.572 [INFO][3897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" host="localhost" Dec 12 17:20:07.602236 containerd[1581]: 2025-12-12 17:20:07.572 [INFO][3897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
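The IPAM records above walk the usual Calico flow: take the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, then claim the first free address in it (192.168.88.129). A toy sketch of that last step using the block and address from the log; it is not Calico's allocator, which also tracks handles, reservations and per-host block affinity.

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // firstFree walks a CIDR block in order and returns the first address that
    // is not already in use. Only a stand-in for Calico's block allocator.
    func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !used[a] {
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.88.128/26") // .128 - .191
    	used := map[netip.Addr]bool{
    		// Marked used so the toy allocator reproduces the .129 seen above;
    		// on this node .128 was evidently already taken.
    		netip.MustParseAddr("192.168.88.128"): true,
    	}
    	if a, ok := firstFree(block, used); ok {
    		fmt.Printf("block %s holds %d addresses, next free: %s\n",
    			block, 1<<(32-block.Bits()), a) // next free: 192.168.88.129
    	}
    }

The /26 is one of the 64-address blocks Calico carves out of the pool and pins to a node through the affinity the log confirms before assigning.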
Dec 12 17:20:07.602236 containerd[1581]: 2025-12-12 17:20:07.572 [INFO][3897] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" HandleID="k8s-pod-network.473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Workload="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.602347 containerd[1581]: 2025-12-12 17:20:07.574 [INFO][3882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7847574df--8wxnl-eth0", GenerateName:"whisker-7847574df-", Namespace:"calico-system", SelfLink:"", UID:"62dd324a-6db6-477a-9870-2e631369a8d1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7847574df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7847574df-8wxnl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c523fa8711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:07.602347 containerd[1581]: 2025-12-12 17:20:07.574 [INFO][3882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.602419 containerd[1581]: 2025-12-12 17:20:07.574 [INFO][3882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c523fa8711 ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.602419 containerd[1581]: 2025-12-12 17:20:07.584 [INFO][3882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.602458 containerd[1581]: 2025-12-12 17:20:07.584 [INFO][3882] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7847574df--8wxnl-eth0", GenerateName:"whisker-7847574df-", Namespace:"calico-system", SelfLink:"", UID:"62dd324a-6db6-477a-9870-2e631369a8d1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7847574df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf", Pod:"whisker-7847574df-8wxnl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c523fa8711", MAC:"ea:ad:5d:54:b7:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:07.602499 containerd[1581]: 2025-12-12 17:20:07.599 [INFO][3882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" Namespace="calico-system" Pod="whisker-7847574df-8wxnl" WorkloadEndpoint="localhost-k8s-whisker--7847574df--8wxnl-eth0" Dec 12 17:20:07.652367 containerd[1581]: time="2025-12-12T17:20:07.652319584Z" level=info msg="connecting to shim 473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf" address="unix:///run/containerd/s/ecd616f2ceb9d39d16d728d3a592806641bede7b9d53fd77ebeabc9d20315435" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:07.672770 kubelet[2754]: I1212 17:20:07.672735 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:20:07.673173 kubelet[2754]: E1212 17:20:07.673154 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:07.695082 systemd[1]: Started cri-containerd-473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf.scope - libcontainer container 473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf. 
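The kubelet "Nameserver limits exceeded" error above means the node's resolv.conf listed more nameservers than a pod's resolv.conf can carry, so only the first ones (1.1.1.1 1.0.0.1 8.8.8.8) were applied. A rough sketch of that trimming, assuming the classic three-nameserver limit; the file path and limit are illustrative, not kubelet's actual code.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // capNameservers mimics, very loosely, the check behind kubelet's
    // "Nameserver limits exceeded" warning: nameserver entries beyond the
    // first `limit` (classically three) are dropped.
    func capNameservers(path string, limit int) (applied, omitted []string, err error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, nil, err
    	}
    	defer f.Close()

    	var all []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			all = append(all, fields[1])
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return nil, nil, err
    	}
    	if len(all) <= limit {
    		return all, nil, nil
    	}
    	return all[:limit], all[limit:], nil
    }

    func main() {
    	applied, omitted, err := capNameservers("/etc/resolv.conf", 3)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("applied:", strings.Join(applied, " "))
    	if len(omitted) > 0 {
    		fmt.Println("omitted:", strings.Join(omitted, " "))
    	}
    }
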
Dec 12 17:20:07.704000 audit: BPF prog-id=173 op=LOAD Dec 12 17:20:07.705000 audit: BPF prog-id=174 op=LOAD Dec 12 17:20:07.705000 audit[3929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.705000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:20:07.705000 audit[3929]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.705000 audit: BPF prog-id=175 op=LOAD Dec 12 17:20:07.705000 audit[3929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.705000 audit: BPF prog-id=176 op=LOAD Dec 12 17:20:07.705000 audit[3929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.705000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:20:07.705000 audit[3929]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.705000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:20:07.705000 audit[3929]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.705000 audit: BPF prog-id=177 op=LOAD Dec 12 17:20:07.705000 audit[3929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3919 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:07.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333934306130643264383434643165393432353036623635303830 Dec 12 17:20:07.708076 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:07.737136 containerd[1581]: time="2025-12-12T17:20:07.737077977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7847574df-8wxnl,Uid:62dd324a-6db6-477a-9870-2e631369a8d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"473940a0d2d844d1e942506b650801eb3fed2dc6dcc94fe6d440c248ebb37ddf\"" Dec 12 17:20:07.740749 containerd[1581]: time="2025-12-12T17:20:07.740700100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:20:07.949839 containerd[1581]: time="2025-12-12T17:20:07.949496401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:07.956105 containerd[1581]: time="2025-12-12T17:20:07.956023167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:20:07.956228 containerd[1581]: time="2025-12-12T17:20:07.956158767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:07.960275 kubelet[2754]: E1212 17:20:07.960218 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:20:07.961961 kubelet[2754]: E1212 17:20:07.961896 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:20:07.973801 kubelet[2754]: E1212 17:20:07.973724 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d83762e8f24b14ac1baa2822839bf5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kglt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7847574df-8wxnl_calico-system(62dd324a-6db6-477a-9870-2e631369a8d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:07.976752 containerd[1581]: time="2025-12-12T17:20:07.976703985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:20:08.181646 containerd[1581]: time="2025-12-12T17:20:08.181583393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:08.190904 containerd[1581]: time="2025-12-12T17:20:08.190824920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:20:08.191025 containerd[1581]: time="2025-12-12T17:20:08.190945120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:08.191215 kubelet[2754]: E1212 17:20:08.191175 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:20:08.191416 kubelet[2754]: E1212 17:20:08.191231 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:20:08.191471 kubelet[2754]: E1212 17:20:08.191364 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kglt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7847574df-8wxnl_calico-system(62dd324a-6db6-477a-9870-2e631369a8d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:08.192652 kubelet[2754]: E1212 17:20:08.192602 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1" Dec 12 17:20:08.679527 kubelet[2754]: E1212 17:20:08.679321 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1" Dec 12 17:20:08.716000 audit[4059]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:08.716000 audit[4059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc4666760 a2=0 a3=1 items=0 ppid=2913 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:08.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:08.729000 audit[4059]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:08.729000 audit[4059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4666760 a2=0 a3=1 items=0 ppid=2913 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:08.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:09.178716 systemd-networkd[1497]: cali4c523fa8711: Gained IPv6LL Dec 12 17:20:09.683202 kubelet[2754]: E1212 17:20:09.683137 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1" Dec 12 17:20:11.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.23:22-10.0.0.1:39186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:11.113058 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:39186.service - OpenSSH per-connection server daemon (10.0.0.1:39186). 
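The whisker pod never starts because both image pulls 404 against ghcr.io, and the kubelet then cycles ErrImagePull → ImagePullBackOff, as the repeated records above show. A minimal way to reproduce the resolution failure against the node's own containerd, bypassing the kubelet; this assumes the containerd 1.x Go client (the 2.x client lives under a different module path) and the default socket path.

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "connect:", err)
    		os.Exit(1)
    	}
    	defer client.Close()

    	// kubelet-managed images live in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
    	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    	if err != nil {
    		// Expect the same "failed to resolve image ... not found" as the log.
    		fmt.Fprintln(os.Stderr, "pull:", err)
    		os.Exit(1)
    	}
    	fmt.Println("pulled", img.Name())
    }

Running crictl pull with the same reference would exercise the CRI path the kubelet itself uses, if the goal is to reproduce the exact error string.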
Dec 12 17:20:11.201460 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 39186 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:11.200000 audit[4112]: USER_ACCT pid=4112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.203000 audit[4112]: CRED_ACQ pid=4112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.203000 audit[4112]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdedf5640 a2=3 a3=0 items=0 ppid=1 pid=4112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:11.203000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:11.205844 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:11.212130 systemd-logind[1564]: New session 8 of user core. Dec 12 17:20:11.217742 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:20:11.223000 audit[4112]: USER_START pid=4112 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.225000 audit[4123]: CRED_ACQ pid=4123 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.381763 sshd[4123]: Connection closed by 10.0.0.1 port 39186 Dec 12 17:20:11.382372 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:11.385000 audit[4112]: USER_END pid=4112 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.386571 kernel: kauditd_printk_skb: 41 callbacks suppressed Dec 12 17:20:11.386635 kernel: audit: type=1106 audit(1765560011.385:579): pid=4112 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.389997 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:39186.service: Deactivated successfully. Dec 12 17:20:11.385000 audit[4112]: CRED_DISP pid=4112 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.392132 systemd[1]: session-8.scope: Deactivated successfully. 
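The "Accepted publickey for core ... RSA SHA256:wUd39…" record above identifies the key only by its OpenSSH fingerprint: the unpadded base64 of the SHA-256 of the wire-format public key. A small sketch that prints the same style of fingerprint for each entry in an authorized_keys file, so entries can be matched against such sshd log lines; the file path is only an example.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	data, err := os.ReadFile("/home/core/.ssh/authorized_keys") // path is illustrative
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for len(data) > 0 {
    		pub, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
    		if err != nil {
    			break // no further parseable keys
    		}
    		// FingerprintSHA256 prints the "SHA256:..." form seen in sshd logs.
    		fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
    		data = rest
    	}
    }

The same value is what ssh-keygen -lf prints for a public key file.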
Dec 12 17:20:11.393137 kernel: audit: type=1104 audit(1765560011.385:580): pid=4112 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:11.393198 kernel: audit: type=1131 audit(1765560011.389:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.23:22-10.0.0.1:39186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:11.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.23:22-10.0.0.1:39186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:11.396131 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:20:11.397057 systemd-logind[1564]: Removed session 8. Dec 12 17:20:11.746359 kubelet[2754]: I1212 17:20:11.746220 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:20:11.750131 kubelet[2754]: E1212 17:20:11.750075 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:11.790000 audit[4152]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:11.790000 audit[4152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc833d6c0 a2=0 a3=1 items=0 ppid=2913 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:11.796999 kernel: audit: type=1325 audit(1765560011.790:582): table=filter:121 family=2 entries=21 op=nft_register_rule pid=4152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:11.797097 kernel: audit: type=1300 audit(1765560011.790:582): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc833d6c0 a2=0 a3=1 items=0 ppid=2913 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:11.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:11.799004 kernel: audit: type=1327 audit(1765560011.790:582): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:11.811000 audit[4152]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:11.811000 audit[4152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc833d6c0 a2=0 a3=1 items=0 ppid=2913 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:11.817378 kernel: audit: type=1325 audit(1765560011.811:583): table=nat:122 family=2 entries=19 op=nft_register_chain pid=4152 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:20:11.817459 kernel: audit: type=1300 audit(1765560011.811:583): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc833d6c0 a2=0 a3=1 items=0 ppid=2913 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:11.817487 kernel: audit: type=1327 audit(1765560011.811:583): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:11.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:12.345000 audit: BPF prog-id=178 op=LOAD Dec 12 17:20:12.346586 kernel: audit: type=1334 audit(1765560012.345:584): prog-id=178 op=LOAD Dec 12 17:20:12.345000 audit[4187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1484998 a2=98 a3=ffffd1484988 items=0 ppid=4155 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.345000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:20:12.345000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:20:12.345000 audit[4187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd1484968 a3=0 items=0 ppid=4155 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.345000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:20:12.346000 audit: BPF prog-id=179 op=LOAD Dec 12 17:20:12.346000 audit[4187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1484848 a2=74 a3=95 items=0 ppid=4155 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.346000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:20:12.346000 audit: BPF prog-id=179 op=UNLOAD Dec 12 17:20:12.346000 audit[4187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4155 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.346000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:20:12.346000 audit: BPF prog-id=180 op=LOAD Dec 12 17:20:12.346000 audit[4187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1484878 a2=40 a3=ffffd14848a8 items=0 ppid=4155 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.346000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:20:12.346000 audit: BPF prog-id=180 op=UNLOAD Dec 12 17:20:12.346000 audit[4187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd14848a8 items=0 ppid=4155 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.346000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:20:12.348000 audit: BPF prog-id=181 op=LOAD Dec 12 17:20:12.348000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3204ef8 a2=98 a3=ffffc3204ee8 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.348000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.349000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:20:12.349000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc3204ec8 a3=0 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.349000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.349000 audit: BPF prog-id=182 op=LOAD Dec 12 17:20:12.349000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc3204b88 a2=74 a3=95 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.349000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.349000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:20:12.349000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.349000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.349000 audit: BPF prog-id=183 op=LOAD Dec 12 17:20:12.349000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc3204be8 a2=94 a3=2 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.349000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.349000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:20:12.349000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.349000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.460000 audit: BPF prog-id=184 op=LOAD Dec 12 17:20:12.460000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc3204ba8 a2=40 a3=ffffc3204bd8 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.460000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.460000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:20:12.460000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc3204bd8 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.460000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.477000 audit: BPF prog-id=185 op=LOAD Dec 12 17:20:12.477000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc3204bb8 a2=94 a3=4 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.477000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:20:12.477000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.477000 audit: BPF prog-id=186 op=LOAD Dec 12 17:20:12.477000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc32049f8 a2=94 a3=5 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.477000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:20:12.477000 audit[4188]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.477000 audit: BPF prog-id=187 op=LOAD Dec 12 17:20:12.477000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc3204c28 a2=94 a3=6 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.477000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:20:12.477000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.478000 audit: BPF prog-id=188 op=LOAD Dec 12 17:20:12.478000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc32043f8 a2=94 a3=83 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.478000 audit: BPF prog-id=189 op=LOAD Dec 12 17:20:12.478000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc32041b8 a2=94 a3=2 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.478000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:20:12.478000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.478000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:20:12.478000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2a138620 a3=2a12bb00 items=0 ppid=4155 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:20:12.490000 audit: BPF prog-id=190 op=LOAD Dec 12 17:20:12.490000 audit[4219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed13a148 a2=98 a3=ffffed13a138 items=0 ppid=4155 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.490000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:20:12.490000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:20:12.490000 audit[4219]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffed13a118 a3=0 items=0 ppid=4155 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.490000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:20:12.490000 audit: BPF prog-id=191 op=LOAD Dec 12 17:20:12.490000 audit[4219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed139ff8 a2=74 a3=95 items=0 ppid=4155 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.490000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:20:12.490000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:20:12.490000 audit[4219]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4155 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.490000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:20:12.490000 audit: BPF prog-id=192 op=LOAD Dec 12 17:20:12.490000 audit[4219]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed13a028 a2=40 a3=ffffed13a058 items=0 ppid=4155 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.490000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:20:12.490000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:20:12.490000 audit[4219]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffed13a058 items=0 ppid=4155 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.490000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:20:12.513297 containerd[1581]: time="2025-12-12T17:20:12.513117206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nj7gb,Uid:f493074d-c6eb-434c-b64e-346bcf34db0d,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:12.667991 systemd-networkd[1497]: vxlan.calico: Link UP Dec 12 17:20:12.667998 systemd-networkd[1497]: vxlan.calico: Gained carrier Dec 12 17:20:12.686502 kubelet[2754]: E1212 17:20:12.686452 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:12.693000 audit: BPF prog-id=193 op=LOAD Dec 12 17:20:12.693000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd66e95b8 a2=98 a3=ffffd66e95a8 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.693000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd66e9588 a3=0 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=194 op=LOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd66e9298 a2=74 a3=95 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=195 op=LOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd66e92f8 a2=94 a3=2 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=196 op=LOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd66e9178 a2=40 a3=ffffd66e91a8 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd66e91a8 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=197 op=LOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd66e92c8 a2=94 a3=b7 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.694000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:20:12.694000 audit[4260]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.694000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.696000 audit: BPF prog-id=198 op=LOAD Dec 12 17:20:12.696000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd66e8978 a2=94 a3=2 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.696000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.696000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:20:12.696000 audit[4260]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.696000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.696000 audit: BPF prog-id=199 op=LOAD Dec 12 17:20:12.696000 audit[4260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd66e8b08 a2=94 a3=30 items=0 ppid=4155 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.696000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:20:12.701000 audit: BPF prog-id=200 op=LOAD Dec 12 17:20:12.701000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdae35438 a2=98 a3=ffffdae35428 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.701000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.701000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:20:12.701000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdae35408 a3=0 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.701000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.702000 audit: BPF prog-id=201 op=LOAD Dec 12 17:20:12.702000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdae350c8 a2=74 a3=95 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.702000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.702000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:20:12.702000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.702000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.702000 audit: BPF prog-id=202 op=LOAD Dec 12 17:20:12.702000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdae35128 a2=94 a3=2 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.702000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.702000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:20:12.702000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.702000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.810000 audit: BPF prog-id=203 op=LOAD Dec 12 17:20:12.810000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdae350e8 a2=40 a3=ffffdae35118 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.810000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:20:12.810000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdae35118 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.820724 systemd-networkd[1497]: calie5fc17ae28d: Link UP Dec 12 17:20:12.821181 systemd-networkd[1497]: calie5fc17ae28d: Gained carrier Dec 12 17:20:12.825000 audit: BPF prog-id=204 op=LOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdae350f8 a2=94 a3=4 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.825000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.825000 audit: BPF prog-id=205 op=LOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdae34f38 a2=94 a3=5 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.825000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.825000 audit: BPF prog-id=206 op=LOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdae35168 a2=94 a3=6 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 
17:20:12.825000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.825000 audit: BPF prog-id=207 op=LOAD Dec 12 17:20:12.825000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdae34938 a2=94 a3=83 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.826000 audit: BPF prog-id=208 op=LOAD Dec 12 17:20:12.826000 audit[4264]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdae346f8 a2=94 a3=2 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.826000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.826000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:20:12.826000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.826000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.826000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:20:12.826000 audit[4264]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3aada620 a3=3aacdb00 items=0 ppid=4155 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.826000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:20:12.836000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:20:12.836000 audit[4155]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000d4e3c0 a2=0 a3=0 items=0 ppid=3956 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.836000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 
17:20:12.838607 containerd[1581]: 2025-12-12 17:20:12.730 [INFO][4239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nj7gb-eth0 goldmane-666569f655- calico-system f493074d-c6eb-434c-b64e-346bcf34db0d 834 0 2025-12-12 17:19:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nj7gb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie5fc17ae28d [] [] }} ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-" Dec 12 17:20:12.838607 containerd[1581]: 2025-12-12 17:20:12.730 [INFO][4239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.838607 containerd[1581]: 2025-12-12 17:20:12.762 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" HandleID="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Workload="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.762 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" HandleID="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Workload="localhost-k8s-goldmane--666569f655--nj7gb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ddbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nj7gb", "timestamp":"2025-12-12 17:20:12.762022162 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.762 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.762 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
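
(Editorial aside, not part of the captured log.) The audit records above carry each process's command line in the PROCTITLE field as hex-encoded, NUL-separated argv: the long bpftool entries decode to invocations such as "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp" and "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A", and the calico-node entry decodes to "calico-node -felix". The small Go helper below is a minimal sketch for decoding such a field; the file and function names are made up for illustration.

// proctitle_decode.go — illustrative helper for reading audit PROCTITLE fields.
// For context: arch=c00000b7 in these records is AUDIT_ARCH_AARCH64, and on
// arm64 syscall 280 is bpf(2), 57 is close(2), 211 is sendmsg(2), 35 is unlinkat(2).
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit proctitle hex string into the original argv,
// splitting on the NUL bytes that separate the arguments.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Taken verbatim from the calico-node audit record above.
	argv, err := decodeProctitle("63616C69636F2D6E6F6465002D66656C6978")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(argv, " ")) // prints: calico-node -felix
}
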
Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.762 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.777 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" host="localhost" Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.783 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.790 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.793 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.796 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:12.838787 containerd[1581]: 2025-12-12 17:20:12.796 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" host="localhost" Dec 12 17:20:12.839030 containerd[1581]: 2025-12-12 17:20:12.798 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f Dec 12 17:20:12.839030 containerd[1581]: 2025-12-12 17:20:12.805 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" host="localhost" Dec 12 17:20:12.839030 containerd[1581]: 2025-12-12 17:20:12.813 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" host="localhost" Dec 12 17:20:12.839030 containerd[1581]: 2025-12-12 17:20:12.813 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" host="localhost" Dec 12 17:20:12.839030 containerd[1581]: 2025-12-12 17:20:12.813 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
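
(Editorial aside, not part of the captured log.) The ipam/ipam.go and ipam/ipam_plugin.go messages above trace Calico's block-affinity allocation for the goldmane pod: take the host-wide IPAM lock, look up the blocks affine to this host, confirm the 192.168.88.128/26 affinity, claim the next free address by writing the block back, then release the lock. The sketch below is a deliberately simplified in-memory model of that flow; its type and function names are hypothetical and are not the libcalico-go IPAM API, which is datastore-backed and handles contention and retries.

// ipam_sketch.go — simplified illustration of the block-affinity flow logged above.
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block models one affine IPAM block such as 192.168.88.128/26.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // addr -> handle, e.g. "k8s-pod-network.<sandbox id>"
}

type ipam struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	blocks []*block   // blocks with affinity to this host
}

// assign hands out the next free address from the host's affine blocks.
func (p *ipam) assign(handle string) (netip.Addr, error) {
	p.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer p.mu.Unlock() // "Released host-wide IPAM lock."
	for _, b := range p.blocks { // "Trying affinity for 192.168.88.128/26"
		for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
			if _, used := b.allocated[a]; !used {
				b.allocated[a] = handle // "Writing block in order to claim IPs"
				return a, nil
			}
		}
	}
	return netip.Addr{}, errors.New("no addresses left in affine blocks")
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), allocated: map[netip.Addr]string{}}
	// Assume the first two addresses are already in use, so the next free one
	// is .130, matching the address assigned to the goldmane pod above.
	b.allocated[netip.MustParseAddr("192.168.88.128")] = "already-used"
	b.allocated[netip.MustParseAddr("192.168.88.129")] = "already-used"
	p := &ipam{blocks: []*block{b}}
	addr, err := p.assign("k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // prints: 192.168.88.130
}
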
Dec 12 17:20:12.839030 containerd[1581]: 2025-12-12 17:20:12.813 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" HandleID="k8s-pod-network.a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Workload="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.839164 containerd[1581]: 2025-12-12 17:20:12.815 [INFO][4239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nj7gb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f493074d-c6eb-434c-b64e-346bcf34db0d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nj7gb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie5fc17ae28d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:12.839164 containerd[1581]: 2025-12-12 17:20:12.815 [INFO][4239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.839291 containerd[1581]: 2025-12-12 17:20:12.815 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5fc17ae28d ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.839291 containerd[1581]: 2025-12-12 17:20:12.820 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.839340 containerd[1581]: 2025-12-12 17:20:12.821 [INFO][4239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nj7gb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f493074d-c6eb-434c-b64e-346bcf34db0d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f", Pod:"goldmane-666569f655-nj7gb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie5fc17ae28d", MAC:"9a:47:2a:bb:89:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:12.839392 containerd[1581]: 2025-12-12 17:20:12.835 [INFO][4239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" Namespace="calico-system" Pod="goldmane-666569f655-nj7gb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nj7gb-eth0" Dec 12 17:20:12.884589 containerd[1581]: time="2025-12-12T17:20:12.884250599Z" level=info msg="connecting to shim a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f" address="unix:///run/containerd/s/fce9546f9f355767bb9f00e2c3d463854516f6d7acdfb620ec3f6f137aee76cb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:12.890000 audit[4328]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:12.890000 audit[4328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffca9fd430 a2=0 a3=ffffaa42afa8 items=0 ppid=4155 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.890000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:12.897000 audit[4332]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4332 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:12.897000 audit[4332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd962d0d0 a2=0 a3=ffff88037fa8 items=0 ppid=4155 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.897000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:12.901000 audit[4327]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:12.901000 audit[4327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcaf8c4c0 a2=0 a3=ffff8d24efa8 items=0 ppid=4155 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.901000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:12.924795 systemd[1]: Started cri-containerd-a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f.scope - libcontainer container a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f. Dec 12 17:20:12.909000 audit[4337]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:12.909000 audit[4337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe1afb4b0 a2=0 a3=ffff962e9fa8 items=0 ppid=4155 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.909000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:12.946000 audit: BPF prog-id=209 op=LOAD Dec 12 17:20:12.946000 audit: BPF prog-id=210 op=LOAD Dec 12 17:20:12.946000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.946000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:20:12.946000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.946000 audit: BPF prog-id=211 op=LOAD Dec 12 17:20:12.946000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.947000 audit: BPF prog-id=212 op=LOAD Dec 12 17:20:12.947000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.947000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:20:12.947000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.947000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:20:12.947000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.947000 audit: BPF prog-id=213 op=LOAD Dec 12 17:20:12.947000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=4312 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133396462663564353430613966626139633532376663613263636663 Dec 12 17:20:12.948664 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:12.958000 audit[4366]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4366 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:12.958000 audit[4366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffcac398f0 a2=0 
a3=ffffbeac4fa8 items=0 ppid=4155 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:12.958000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:12.973780 containerd[1581]: time="2025-12-12T17:20:12.973729855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nj7gb,Uid:f493074d-c6eb-434c-b64e-346bcf34db0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a39dbf5d540a9fba9c527fca2ccfc92da2661351ea3622668a68397d49b8282f\"" Dec 12 17:20:12.975432 containerd[1581]: time="2025-12-12T17:20:12.975388496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:20:13.172262 containerd[1581]: time="2025-12-12T17:20:13.172210013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:13.173637 containerd[1581]: time="2025-12-12T17:20:13.173564133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:20:13.173825 containerd[1581]: time="2025-12-12T17:20:13.173654333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:13.174319 kubelet[2754]: E1212 17:20:13.174034 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:20:13.174319 kubelet[2754]: E1212 17:20:13.174089 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:20:13.174319 kubelet[2754]: E1212 17:20:13.174249 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc9tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nj7gb_calico-system(f493074d-c6eb-434c-b64e-346bcf34db0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:13.175602 kubelet[2754]: E1212 17:20:13.175478 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:13.511990 containerd[1581]: time="2025-12-12T17:20:13.511700852Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84646c749c-5xrgw,Uid:f35c0998-e01c-46ee-bdc1-a591da003d92,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:20:13.512090 containerd[1581]: time="2025-12-12T17:20:13.511711172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b47d77d-m8dkw,Uid:2ac1174a-7255-43cd-9145-6ba385a2a343,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:13.693019 kubelet[2754]: E1212 17:20:13.692894 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:13.718738 systemd-networkd[1497]: cali8be661c4a34: Link UP Dec 12 17:20:13.719007 systemd-networkd[1497]: cali8be661c4a34: Gained carrier Dec 12 17:20:13.736000 audit[4425]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:13.736000 audit[4425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff1359cb0 a2=0 a3=1 items=0 ppid=2913 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:13.739181 containerd[1581]: 2025-12-12 17:20:13.613 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0 calico-apiserver-84646c749c- calico-apiserver f35c0998-e01c-46ee-bdc1-a591da003d92 835 0 2025-12-12 17:19:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84646c749c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84646c749c-5xrgw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8be661c4a34 [] [] }} ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-" Dec 12 17:20:13.739181 containerd[1581]: 2025-12-12 17:20:13.615 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 17:20:13.739181 containerd[1581]: 2025-12-12 17:20:13.654 [INFO][4404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" HandleID="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Workload="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 
17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.654 [INFO][4404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" HandleID="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Workload="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84646c749c-5xrgw", "timestamp":"2025-12-12 17:20:13.654734577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.655 [INFO][4404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.655 [INFO][4404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.655 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.666 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" host="localhost" Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.674 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.681 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.684 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.688 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:13.739727 containerd[1581]: 2025-12-12 17:20:13.688 [INFO][4404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" host="localhost" Dec 12 17:20:13.740018 containerd[1581]: 2025-12-12 17:20:13.691 [INFO][4404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c Dec 12 17:20:13.740018 containerd[1581]: 2025-12-12 17:20:13.700 [INFO][4404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" host="localhost" Dec 12 17:20:13.740018 containerd[1581]: 2025-12-12 17:20:13.710 [INFO][4404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" host="localhost" Dec 12 17:20:13.740018 containerd[1581]: 2025-12-12 17:20:13.711 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" host="localhost" Dec 12 17:20:13.740018 containerd[1581]: 2025-12-12 17:20:13.711 [INFO][4404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
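
(Editorial aside, not part of the captured log.) The goldmane image pull earlier in this log fails with a plain 404 from ghcr.io, which kubelet surfaces first as ErrImagePull and then, on the next pod sync, as ImagePullBackOff. One way to confirm the tag is simply absent from the registry (rather than an auth or mirror problem) is to query the OCI distribution endpoints directly. The sketch below assumes ghcr.io's usual anonymous-token flow for public pulls (realm https://ghcr.io/token, service ghcr.io); it is only an illustration, not how containerd itself resolves images.

// registry_check.go — rough sketch for checking whether a tag exists on ghcr.io.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4"

	// 1. Fetch an anonymous pull token for the repository (assumed ghcr.io token flow).
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. Ask for the manifest; a 404 here matches the "failed to resolve image" error above.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(res.Status) // "404 Not Found" would match the containerd fetch failure logged above
}
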
Dec 12 17:20:13.740018 containerd[1581]: 2025-12-12 17:20:13.711 [INFO][4404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" HandleID="k8s-pod-network.1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Workload="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 17:20:13.740159 containerd[1581]: 2025-12-12 17:20:13.715 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0", GenerateName:"calico-apiserver-84646c749c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f35c0998-e01c-46ee-bdc1-a591da003d92", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84646c749c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84646c749c-5xrgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8be661c4a34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:13.740209 containerd[1581]: 2025-12-12 17:20:13.715 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 17:20:13.740209 containerd[1581]: 2025-12-12 17:20:13.716 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8be661c4a34 ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 17:20:13.740209 containerd[1581]: 2025-12-12 17:20:13.719 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 17:20:13.740282 containerd[1581]: 2025-12-12 17:20:13.720 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0", GenerateName:"calico-apiserver-84646c749c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f35c0998-e01c-46ee-bdc1-a591da003d92", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84646c749c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c", Pod:"calico-apiserver-84646c749c-5xrgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8be661c4a34", MAC:"3e:53:4c:71:3f:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:13.740335 containerd[1581]: 2025-12-12 17:20:13.734 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-5xrgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--5xrgw-eth0" Dec 12 17:20:13.742000 audit[4425]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:13.742000 audit[4425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff1359cb0 a2=0 a3=1 items=0 ppid=2913 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:13.760000 audit[4433]: NETFILTER_CFG table=filter:130 family=2 entries=54 op=nft_register_chain pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:13.760000 audit[4433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffc3a06f50 a2=0 a3=ffffba5d6fa8 items=0 ppid=4155 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.760000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:13.786202 containerd[1581]: time="2025-12-12T17:20:13.786107334Z" level=info msg="connecting to shim 1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c" address="unix:///run/containerd/s/e21a7651ccc9ccdba2e7e09f66753a2f1271f7c5c35856b5337bde636c082bce" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:13.835116 systemd[1]: Started cri-containerd-1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c.scope - libcontainer container 1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c. Dec 12 17:20:13.840922 systemd-networkd[1497]: cali65904607259: Link UP Dec 12 17:20:13.841624 systemd-networkd[1497]: cali65904607259: Gained carrier Dec 12 17:20:13.861621 containerd[1581]: 2025-12-12 17:20:13.623 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0 calico-kube-controllers-578b47d77d- calico-system 2ac1174a-7255-43cd-9145-6ba385a2a343 832 0 2025-12-12 17:19:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:578b47d77d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-578b47d77d-m8dkw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali65904607259 [] [] }} ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-" Dec 12 17:20:13.861621 containerd[1581]: 2025-12-12 17:20:13.623 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.861621 containerd[1581]: 2025-12-12 17:20:13.659 [INFO][4410] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" HandleID="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Workload="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.660 [INFO][4410] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" HandleID="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Workload="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-578b47d77d-m8dkw", "timestamp":"2025-12-12 17:20:13.65981054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 
17:20:13.660 [INFO][4410] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.711 [INFO][4410] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.713 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.773 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" host="localhost" Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.783 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.791 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.797 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.802 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:13.862144 containerd[1581]: 2025-12-12 17:20:13.803 [INFO][4410] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" host="localhost" Dec 12 17:20:13.862917 containerd[1581]: 2025-12-12 17:20:13.806 [INFO][4410] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f Dec 12 17:20:13.862917 containerd[1581]: 2025-12-12 17:20:13.815 [INFO][4410] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" host="localhost" Dec 12 17:20:13.862917 containerd[1581]: 2025-12-12 17:20:13.825 [INFO][4410] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" host="localhost" Dec 12 17:20:13.862917 containerd[1581]: 2025-12-12 17:20:13.825 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" host="localhost" Dec 12 17:20:13.862917 containerd[1581]: 2025-12-12 17:20:13.826 [INFO][4410] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:20:13.862917 containerd[1581]: 2025-12-12 17:20:13.826 [INFO][4410] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" HandleID="k8s-pod-network.7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Workload="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.863046 containerd[1581]: 2025-12-12 17:20:13.830 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0", GenerateName:"calico-kube-controllers-578b47d77d-", Namespace:"calico-system", SelfLink:"", UID:"2ac1174a-7255-43cd-9145-6ba385a2a343", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"578b47d77d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-578b47d77d-m8dkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali65904607259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:13.863095 containerd[1581]: 2025-12-12 17:20:13.830 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.863095 containerd[1581]: 2025-12-12 17:20:13.830 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65904607259 ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.863095 containerd[1581]: 2025-12-12 17:20:13.842 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.864691 containerd[1581]: 2025-12-12 17:20:13.843 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0", GenerateName:"calico-kube-controllers-578b47d77d-", Namespace:"calico-system", SelfLink:"", UID:"2ac1174a-7255-43cd-9145-6ba385a2a343", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"578b47d77d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f", Pod:"calico-kube-controllers-578b47d77d-m8dkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali65904607259", MAC:"62:ef:ea:29:f0:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:13.864792 containerd[1581]: 2025-12-12 17:20:13.858 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" Namespace="calico-system" Pod="calico-kube-controllers-578b47d77d-m8dkw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--578b47d77d--m8dkw-eth0" Dec 12 17:20:13.867000 audit: BPF prog-id=214 op=LOAD Dec 12 17:20:13.868000 audit: BPF prog-id=215 op=LOAD Dec 12 17:20:13.868000 audit[4453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.868000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:20:13.868000 audit[4453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.868000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.868000 audit: BPF prog-id=216 op=LOAD Dec 12 17:20:13.868000 audit[4453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.869000 audit: BPF prog-id=217 op=LOAD Dec 12 17:20:13.869000 audit[4453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.869000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:20:13.869000 audit[4453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.869000 audit: BPF prog-id=216 op=UNLOAD Dec 12 17:20:13.869000 audit[4453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.869000 audit: BPF prog-id=218 op=LOAD Dec 12 17:20:13.869000 audit[4453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4442 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.869000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313361646366663366363635653863363830623839383136643664 Dec 12 17:20:13.871586 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:13.886000 audit[4479]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4479 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:13.886000 audit[4479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21952 a0=3 a1=ffffe61d6bb0 a2=0 a3=ffff84906fa8 items=0 ppid=4155 pid=4479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.886000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:13.904638 containerd[1581]: time="2025-12-12T17:20:13.904475403Z" level=info msg="connecting to shim 7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f" address="unix:///run/containerd/s/13944249e3f20979fc542d1fa43f867b462e2044e1187227da3bc7e6cca9d024" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:13.913885 systemd-networkd[1497]: calie5fc17ae28d: Gained IPv6LL Dec 12 17:20:13.928328 containerd[1581]: time="2025-12-12T17:20:13.928282657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-5xrgw,Uid:f35c0998-e01c-46ee-bdc1-a591da003d92,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1a13adcff3f665e8c680b89816d6d6f2ad4f7426f321a7f3b6167443554aa10c\"" Dec 12 17:20:13.931535 containerd[1581]: time="2025-12-12T17:20:13.931182019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:20:13.946864 systemd[1]: Started cri-containerd-7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f.scope - libcontainer container 7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f. 
Dec 12 17:20:13.959000 audit: BPF prog-id=219 op=LOAD Dec 12 17:20:13.959000 audit: BPF prog-id=220 op=LOAD Dec 12 17:20:13.959000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.960000 audit: BPF prog-id=220 op=UNLOAD Dec 12 17:20:13.960000 audit[4507]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.960000 audit: BPF prog-id=221 op=LOAD Dec 12 17:20:13.960000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.960000 audit: BPF prog-id=222 op=LOAD Dec 12 17:20:13.960000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.960000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:20:13.960000 audit[4507]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.961000 audit: BPF prog-id=221 op=UNLOAD Dec 12 17:20:13.961000 audit[4507]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.961000 audit: BPF prog-id=223 op=LOAD Dec 12 17:20:13.961000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4490 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:13.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764663439646336616565613063646231656162363634663233323235 Dec 12 17:20:13.962407 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:13.986615 containerd[1581]: time="2025-12-12T17:20:13.986569132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b47d77d-m8dkw,Uid:2ac1174a-7255-43cd-9145-6ba385a2a343,Namespace:calico-system,Attempt:0,} returns sandbox id \"7df49dc6aeea0cdb1eab664f23225b0e98e8094302e078ebd0a0e4fa6b80878f\"" Dec 12 17:20:14.130132 containerd[1581]: time="2025-12-12T17:20:14.129913851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:14.144069 containerd[1581]: time="2025-12-12T17:20:14.144006499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:20:14.144330 containerd[1581]: time="2025-12-12T17:20:14.144085739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:14.144519 kubelet[2754]: E1212 17:20:14.144467 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:14.144574 kubelet[2754]: E1212 17:20:14.144552 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:14.145222 kubelet[2754]: E1212 17:20:14.144787 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84646c749c-5xrgw_calico-apiserver(f35c0998-e01c-46ee-bdc1-a591da003d92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:14.145387 containerd[1581]: time="2025-12-12T17:20:14.144929820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:20:14.146837 kubelet[2754]: E1212 17:20:14.146787 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:14.377813 containerd[1581]: time="2025-12-12T17:20:14.377749788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:14.379135 containerd[1581]: time="2025-12-12T17:20:14.379081749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:20:14.379135 containerd[1581]: time="2025-12-12T17:20:14.379163709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
Dec 12 17:20:14.379464 kubelet[2754]: E1212 17:20:14.379379 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:20:14.379464 kubelet[2754]: E1212 17:20:14.379438 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:20:14.379845 kubelet[2754]: E1212 17:20:14.379598 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfzgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-578b47d77d-m8dkw_calico-system(2ac1174a-7255-43cd-9145-6ba385a2a343): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:14.380830 kubelet[2754]: E1212 17:20:14.380727 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:14.489668 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL Dec 12 17:20:14.510436 kubelet[2754]: E1212 17:20:14.510383 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:14.511964 containerd[1581]: time="2025-12-12T17:20:14.511930382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-785fj,Uid:58bcfb02-e7ab-42c9-aa93-1365df71ae6d,Namespace:kube-system,Attempt:0,}" Dec 12 17:20:14.597432 kubelet[2754]: I1212 17:20:14.597259 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:20:14.597773 kubelet[2754]: E1212 17:20:14.597749 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:14.645420 systemd-networkd[1497]: calie6abe08521e: Link UP Dec 12 17:20:14.646697 systemd-networkd[1497]: calie6abe08521e: Gained carrier Dec 12 17:20:14.662629 containerd[1581]: 2025-12-12 17:20:14.559 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--785fj-eth0 coredns-674b8bbfcf- kube-system 58bcfb02-e7ab-42c9-aa93-1365df71ae6d 831 0 2025-12-12 17:19:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-785fj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie6abe08521e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-" Dec 12 17:20:14.662629 containerd[1581]: 2025-12-12 17:20:14.560 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.662629 containerd[1581]: 2025-12-12 17:20:14.589 [INFO][4549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" HandleID="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Workload="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.589 [INFO][4549] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" HandleID="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Workload="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3020), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-785fj", "timestamp":"2025-12-12 17:20:14.589211625 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.589 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.589 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.589 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.601 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" host="localhost" Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.609 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.616 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.620 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.623 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:14.662954 containerd[1581]: 2025-12-12 17:20:14.623 [INFO][4549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" host="localhost" Dec 12 17:20:14.663198 containerd[1581]: 2025-12-12 17:20:14.628 [INFO][4549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5 Dec 12 17:20:14.663198 containerd[1581]: 2025-12-12 17:20:14.632 [INFO][4549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" host="localhost" Dec 12 17:20:14.663198 containerd[1581]: 2025-12-12 17:20:14.640 [INFO][4549] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" host="localhost" Dec 12 17:20:14.663198 containerd[1581]: 2025-12-12 17:20:14.640 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" host="localhost" Dec 12 17:20:14.663198 containerd[1581]: 2025-12-12 17:20:14.640 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:20:14.663198 containerd[1581]: 2025-12-12 17:20:14.640 [INFO][4549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" HandleID="k8s-pod-network.4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Workload="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.663371 containerd[1581]: 2025-12-12 17:20:14.642 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--785fj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"58bcfb02-e7ab-42c9-aa93-1365df71ae6d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-785fj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6abe08521e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:14.663434 containerd[1581]: 2025-12-12 17:20:14.643 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.663434 containerd[1581]: 2025-12-12 17:20:14.643 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6abe08521e ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.663434 containerd[1581]: 2025-12-12 17:20:14.646 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.663503 
containerd[1581]: 2025-12-12 17:20:14.647 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--785fj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"58bcfb02-e7ab-42c9-aa93-1365df71ae6d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5", Pod:"coredns-674b8bbfcf-785fj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6abe08521e", MAC:"aa:1f:5e:85:16:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:14.663503 containerd[1581]: 2025-12-12 17:20:14.659 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-785fj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--785fj-eth0" Dec 12 17:20:14.683000 audit[4582]: NETFILTER_CFG table=filter:132 family=2 entries=60 op=nft_register_chain pid=4582 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:14.683000 audit[4582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28968 a0=3 a1=ffffe8602330 a2=0 a3=ffffb3ab2fa8 items=0 ppid=4155 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.683000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:14.702121 kubelet[2754]: E1212 17:20:14.702065 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:14.707195 kubelet[2754]: E1212 17:20:14.707149 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:14.707631 kubelet[2754]: E1212 17:20:14.707600 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:14.736350 containerd[1581]: time="2025-12-12T17:20:14.735748746Z" level=info msg="connecting to shim 4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5" address="unix:///run/containerd/s/ef7499d47e2a52368c24a9894db2eac381990bfe39e82c5048b1c214957f0f02" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:14.757000 audit[4620]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:14.757000 audit[4620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd88d0c30 a2=0 a3=1 items=0 ppid=2913 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:14.762000 audit[4620]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:14.762000 audit[4620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd88d0c30 a2=0 a3=1 items=0 ppid=2913 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:14.774799 systemd[1]: Started cri-containerd-4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5.scope - libcontainer container 4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5. 
Dec 12 17:20:14.801000 audit: BPF prog-id=224 op=LOAD Dec 12 17:20:14.802000 audit: BPF prog-id=225 op=LOAD Dec 12 17:20:14.802000 audit[4614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.802000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:20:14.802000 audit[4614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.803000 audit: BPF prog-id=226 op=LOAD Dec 12 17:20:14.803000 audit[4614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.803000 audit: BPF prog-id=227 op=LOAD Dec 12 17:20:14.803000 audit[4614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.803000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:20:14.803000 audit[4614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.803000 audit: BPF prog-id=226 op=UNLOAD Dec 12 17:20:14.803000 audit[4614]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.803000 audit: BPF prog-id=228 op=LOAD Dec 12 17:20:14.803000 audit[4614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4601 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666136656134326339386135656437656135343932663731636231 Dec 12 17:20:14.804565 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:14.837972 kubelet[2754]: E1212 17:20:14.837916 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:14.842833 containerd[1581]: time="2025-12-12T17:20:14.842451965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-785fj,Uid:58bcfb02-e7ab-42c9-aa93-1365df71ae6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5\"" Dec 12 17:20:14.859016 kubelet[2754]: E1212 17:20:14.858980 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:14.865706 containerd[1581]: time="2025-12-12T17:20:14.865663817Z" level=info msg="CreateContainer within sandbox \"4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:20:14.873856 systemd-networkd[1497]: cali65904607259: Gained IPv6LL Dec 12 17:20:14.880685 containerd[1581]: time="2025-12-12T17:20:14.880638266Z" level=info msg="Container 923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:20:14.886789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593359723.mount: Deactivated successfully. 
Dec 12 17:20:14.890547 containerd[1581]: time="2025-12-12T17:20:14.890450071Z" level=info msg="CreateContainer within sandbox \"4ffa6ea42c98a5ed7ea5492f71cb1065a9505107b3ffd88a7d37090de20b78f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f\"" Dec 12 17:20:14.891486 containerd[1581]: time="2025-12-12T17:20:14.891438592Z" level=info msg="StartContainer for \"923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f\"" Dec 12 17:20:14.892990 containerd[1581]: time="2025-12-12T17:20:14.892696472Z" level=info msg="connecting to shim 923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f" address="unix:///run/containerd/s/ef7499d47e2a52368c24a9894db2eac381990bfe39e82c5048b1c214957f0f02" protocol=ttrpc version=3 Dec 12 17:20:14.918769 systemd[1]: Started cri-containerd-923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f.scope - libcontainer container 923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f. Dec 12 17:20:14.933000 audit: BPF prog-id=229 op=LOAD Dec 12 17:20:14.934000 audit: BPF prog-id=230 op=LOAD Dec 12 17:20:14.934000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.934000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:20:14.934000 audit[4667]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.935000 audit: BPF prog-id=231 op=LOAD Dec 12 17:20:14.935000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.935000 audit: BPF prog-id=232 op=LOAD Dec 12 17:20:14.935000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.935000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.935000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:20:14.935000 audit[4667]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.935000 audit: BPF prog-id=231 op=UNLOAD Dec 12 17:20:14.935000 audit[4667]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.935000 audit: BPF prog-id=233 op=LOAD Dec 12 17:20:14.935000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4601 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932336536363865343232336661356264656362343235663935386633 Dec 12 17:20:14.956821 containerd[1581]: time="2025-12-12T17:20:14.956491867Z" level=info msg="StartContainer for \"923e668e4223fa5bdecb425f958f3d3df7555b717fe1f92a7613c556109d945f\" returns successfully" Dec 12 17:20:15.511195 containerd[1581]: time="2025-12-12T17:20:15.511141796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p27qt,Uid:9c455fe7-f7d2-456e-ac64-f3619ba04a75,Namespace:calico-system,Attempt:0,}" Dec 12 17:20:15.659810 systemd-networkd[1497]: cali27d0ff625b6: Link UP Dec 12 17:20:15.660639 systemd-networkd[1497]: cali27d0ff625b6: Gained carrier Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.568 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--p27qt-eth0 csi-node-driver- calico-system 9c455fe7-f7d2-456e-ac64-f3619ba04a75 740 0 2025-12-12 17:19:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost 
csi-node-driver-p27qt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali27d0ff625b6 [] [] }} ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.568 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.597 [INFO][4719] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" HandleID="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Workload="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.597 [INFO][4719] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" HandleID="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Workload="localhost-k8s-csi--node--driver--p27qt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a7160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-p27qt", "timestamp":"2025-12-12 17:20:15.59705472 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.597 [INFO][4719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.597 [INFO][4719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.597 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.608 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.614 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.621 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.624 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.627 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.627 [INFO][4719] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.629 [INFO][4719] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.640 [INFO][4719] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.653 [INFO][4719] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.654 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" host="localhost" Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.654 [INFO][4719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:20:15.678702 containerd[1581]: 2025-12-12 17:20:15.654 [INFO][4719] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" HandleID="k8s-pod-network.60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Workload="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.679310 containerd[1581]: 2025-12-12 17:20:15.656 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p27qt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c455fe7-f7d2-456e-ac64-f3619ba04a75", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-p27qt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27d0ff625b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:15.679310 containerd[1581]: 2025-12-12 17:20:15.656 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.679310 containerd[1581]: 2025-12-12 17:20:15.656 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27d0ff625b6 ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.679310 containerd[1581]: 2025-12-12 17:20:15.661 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.679310 containerd[1581]: 2025-12-12 17:20:15.661 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p27qt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c455fe7-f7d2-456e-ac64-f3619ba04a75", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b", Pod:"csi-node-driver-p27qt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27d0ff625b6", MAC:"1a:d6:94:32:0a:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:15.679310 containerd[1581]: 2025-12-12 17:20:15.675 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" Namespace="calico-system" Pod="csi-node-driver-p27qt" WorkloadEndpoint="localhost-k8s-csi--node--driver--p27qt-eth0" Dec 12 17:20:15.691000 audit[4736]: NETFILTER_CFG table=filter:135 family=2 entries=48 op=nft_register_chain pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:15.691000 audit[4736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffc5cd7690 a2=0 a3=ffffacf51fa8 items=0 ppid=4155 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.691000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:15.703262 containerd[1581]: time="2025-12-12T17:20:15.703110735Z" level=info msg="connecting to shim 60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b" address="unix:///run/containerd/s/a6780de679cc44e8f19f6ef83cd05831a9f143933b0efcd4764f7979ee3df6be" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:15.710548 kubelet[2754]: E1212 17:20:15.710479 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:15.712413 kubelet[2754]: E1212 17:20:15.712369 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:15.712939 kubelet[2754]: E1212 17:20:15.712856 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:15.735798 systemd[1]: Started cri-containerd-60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b.scope - libcontainer container 60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b. Dec 12 17:20:15.753000 audit: BPF prog-id=234 op=LOAD Dec 12 17:20:15.754000 audit: BPF prog-id=235 op=LOAD Dec 12 17:20:15.754000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.754000 audit: BPF prog-id=235 op=UNLOAD Dec 12 17:20:15.754000 audit[4757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.755000 audit: BPF prog-id=236 op=LOAD Dec 12 17:20:15.755000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.755000 audit: BPF prog-id=237 op=LOAD Dec 12 17:20:15.755000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.755000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:20:15.755000 audit[4757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.755000 audit: BPF prog-id=236 op=UNLOAD Dec 12 17:20:15.755000 audit[4757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.755000 audit: BPF prog-id=238 op=LOAD Dec 12 17:20:15.755000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633934396661343465343639623833643730616633396433316130 Dec 12 17:20:15.758819 kubelet[2754]: I1212 17:20:15.758002 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-785fj" podStartSLOduration=38.757982044 podStartE2EDuration="38.757982044s" podCreationTimestamp="2025-12-12 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:20:15.757712243 +0000 UTC m=+44.340844330" watchObservedRunningTime="2025-12-12 17:20:15.757982044 +0000 UTC m=+44.341114091" Dec 12 17:20:15.759425 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:15.769671 systemd-networkd[1497]: cali8be661c4a34: Gained IPv6LL Dec 12 17:20:15.772000 audit[4778]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:15.772000 audit[4778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffea2d6220 a2=0 a3=1 items=0 ppid=2913 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:15.779000 audit[4778]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:15.779000 audit[4778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffea2d6220 a2=0 a3=1 items=0 ppid=2913 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:15.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:15.790151 containerd[1581]: time="2025-12-12T17:20:15.790109940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p27qt,Uid:9c455fe7-f7d2-456e-ac64-f3619ba04a75,Namespace:calico-system,Attempt:0,} returns sandbox id \"60c949fa44e469b83d70af39d31a06726538b47f8a2848ee3eb0705ba9c9e48b\"" Dec 12 17:20:15.792996 containerd[1581]: time="2025-12-12T17:20:15.792959902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:20:16.015364 containerd[1581]: time="2025-12-12T17:20:16.015184616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:16.016524 containerd[1581]: time="2025-12-12T17:20:16.016448337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:20:16.016524 containerd[1581]: time="2025-12-12T17:20:16.016486257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:16.016746 kubelet[2754]: E1212 17:20:16.016703 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:20:16.016792 kubelet[2754]: E1212 17:20:16.016757 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:20:16.017757 kubelet[2754]: E1212 17:20:16.017657 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:16.019649 containerd[1581]: time="2025-12-12T17:20:16.019617698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:20:16.232286 containerd[1581]: time="2025-12-12T17:20:16.232133001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:16.233298 containerd[1581]: time="2025-12-12T17:20:16.233240362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:20:16.233376 containerd[1581]: time="2025-12-12T17:20:16.233338642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:16.233703 kubelet[2754]: E1212 17:20:16.233527 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:20:16.233703 kubelet[2754]: E1212 17:20:16.233578 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:20:16.233902 kubelet[2754]: E1212 17:20:16.233865 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:16.235254 kubelet[2754]: E1212 17:20:16.235196 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:16.282411 systemd-networkd[1497]: calie6abe08521e: Gained IPv6LL Dec 12 17:20:16.397499 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:39202.service - OpenSSH per-connection server daemon (10.0.0.1:39202). 
Dec 12 17:20:16.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.23:22-10.0.0.1:39202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:16.401007 kernel: kauditd_printk_skb: 362 callbacks suppressed Dec 12 17:20:16.401173 kernel: audit: type=1130 audit(1765560016.397:709): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.23:22-10.0.0.1:39202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:16.487000 audit[4785]: USER_ACCT pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.492041 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 39202 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:16.492536 kernel: audit: type=1101 audit(1765560016.487:710): pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.491000 audit[4785]: CRED_ACQ pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.496378 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:16.498557 kernel: audit: type=1103 audit(1765560016.491:711): pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.498629 kernel: audit: type=1006 audit(1765560016.491:712): pid=4785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 17:20:16.491000 audit[4785]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd29eb020 a2=3 a3=0 items=0 ppid=1 pid=4785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.502333 kernel: audit: type=1300 audit(1765560016.491:712): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd29eb020 a2=3 a3=0 items=0 ppid=1 pid=4785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.491000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:16.503726 kernel: audit: type=1327 audit(1765560016.491:712): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:16.506717 systemd-logind[1564]: New session 9 of user core. 
Dec 12 17:20:16.510947 containerd[1581]: time="2025-12-12T17:20:16.510903336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-wxdfq,Uid:0e2d55e4-7343-4bc1-8a02-a707014e8ced,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:20:16.518733 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:20:16.523000 audit[4785]: USER_START pid=4785 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.528613 kernel: audit: type=1105 audit(1765560016.523:713): pid=4785 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.528731 kernel: audit: type=1103 audit(1765560016.528:714): pid=4799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.528000 audit[4799]: CRED_ACQ pid=4799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.668271 systemd-networkd[1497]: cali426df8dcbb8: Link UP Dec 12 17:20:16.668546 systemd-networkd[1497]: cali426df8dcbb8: Gained carrier Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.566 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0 calico-apiserver-84646c749c- calico-apiserver 0e2d55e4-7343-4bc1-8a02-a707014e8ced 838 0 2025-12-12 17:19:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84646c749c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84646c749c-wxdfq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali426df8dcbb8 [] [] }} ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.566 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.601 [INFO][4811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" HandleID="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Workload="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.689578 
containerd[1581]: 2025-12-12 17:20:16.601 [INFO][4811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" HandleID="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Workload="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84646c749c-wxdfq", "timestamp":"2025-12-12 17:20:16.60142262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.601 [INFO][4811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.601 [INFO][4811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.601 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.616 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.626 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.637 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.640 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.643 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.643 [INFO][4811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.645 [INFO][4811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.651 [INFO][4811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.661 [INFO][4811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.661 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" host="localhost" Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.661 [INFO][4811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:20:16.689578 containerd[1581]: 2025-12-12 17:20:16.661 [INFO][4811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" HandleID="k8s-pod-network.206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Workload="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.690450 containerd[1581]: 2025-12-12 17:20:16.664 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0", GenerateName:"calico-apiserver-84646c749c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e2d55e4-7343-4bc1-8a02-a707014e8ced", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84646c749c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84646c749c-wxdfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali426df8dcbb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:16.690450 containerd[1581]: 2025-12-12 17:20:16.664 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.690450 containerd[1581]: 2025-12-12 17:20:16.664 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali426df8dcbb8 ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.690450 containerd[1581]: 2025-12-12 17:20:16.668 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.690450 containerd[1581]: 2025-12-12 17:20:16.668 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0", GenerateName:"calico-apiserver-84646c749c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e2d55e4-7343-4bc1-8a02-a707014e8ced", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84646c749c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b", Pod:"calico-apiserver-84646c749c-wxdfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali426df8dcbb8", MAC:"02:e8:83:42:58:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:16.690450 containerd[1581]: 2025-12-12 17:20:16.685 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" Namespace="calico-apiserver" Pod="calico-apiserver-84646c749c-wxdfq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84646c749c--wxdfq-eth0" Dec 12 17:20:16.708000 audit[4828]: NETFILTER_CFG table=filter:138 family=2 entries=59 op=nft_register_chain pid=4828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:16.708000 audit[4828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29476 a0=3 a1=ffffc053d910 a2=0 a3=ffff9b01ffa8 items=0 ppid=4155 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.714141 kubelet[2754]: E1212 17:20:16.714082 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:16.716356 kernel: audit: type=1325 audit(1765560016.708:715): table=filter:138 family=2 entries=59 op=nft_register_chain pid=4828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:16.716557 kernel: audit: type=1300 audit(1765560016.708:715): arch=c00000b7 syscall=211 success=yes exit=29476 a0=3 a1=ffffc053d910 a2=0 a3=ffff9b01ffa8 items=0 ppid=4155 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.708000 
audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:16.721584 kubelet[2754]: E1212 17:20:16.721300 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:16.738759 containerd[1581]: time="2025-12-12T17:20:16.738646407Z" level=info msg="connecting to shim 206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b" address="unix:///run/containerd/s/ecb01b1ea35c6dad32a4a4fdd7a80923ac6360d2b04aa0edfb1e57c892fc9376" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:16.743213 sshd[4799]: Connection closed by 10.0.0.1 port 39202 Dec 12 17:20:16.743756 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:16.745000 audit[4785]: USER_END pid=4785 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.745000 audit[4785]: CRED_DISP pid=4785 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:16.752376 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:39202.service: Deactivated successfully. Dec 12 17:20:16.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.23:22-10.0.0.1:39202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:16.757282 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:20:16.766653 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:20:16.768000 audit[4853]: NETFILTER_CFG table=filter:139 family=2 entries=17 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:16.768000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffa27e480 a2=0 a3=1 items=0 ppid=2913 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:16.771207 systemd-logind[1564]: Removed session 9. 
Dec 12 17:20:16.775000 audit[4853]: NETFILTER_CFG table=nat:140 family=2 entries=35 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:16.775000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffa27e480 a2=0 a3=1 items=0 ppid=2913 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:16.786832 systemd[1]: Started cri-containerd-206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b.scope - libcontainer container 206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b. Dec 12 17:20:16.798000 audit: BPF prog-id=239 op=LOAD Dec 12 17:20:16.799000 audit: BPF prog-id=240 op=LOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.799000 audit: BPF prog-id=240 op=UNLOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.799000 audit: BPF prog-id=241 op=LOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.799000 audit: BPF prog-id=242 op=LOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.799000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.799000 audit: BPF prog-id=241 op=UNLOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.799000 audit: BPF prog-id=243 op=LOAD Dec 12 17:20:16.799000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:16.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366164633334393834303261386133353733613431303134343466 Dec 12 17:20:16.801113 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:16.838281 containerd[1581]: time="2025-12-12T17:20:16.838235295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84646c749c-wxdfq,Uid:0e2d55e4-7343-4bc1-8a02-a707014e8ced,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"206adc3498402a8a3573a4101444fded81416c6b37c48da6e6dfcfb195bdbc4b\"" Dec 12 17:20:16.840156 containerd[1581]: time="2025-12-12T17:20:16.840117816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:20:17.062477 containerd[1581]: time="2025-12-12T17:20:17.062421802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:17.063619 containerd[1581]: time="2025-12-12T17:20:17.063501202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:20:17.063619 containerd[1581]: time="2025-12-12T17:20:17.063555883Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:17.063822 kubelet[2754]: E1212 17:20:17.063742 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:17.063822 kubelet[2754]: E1212 17:20:17.063810 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:17.064000 kubelet[2754]: E1212 17:20:17.063941 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv4jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84646c749c-wxdfq_calico-apiserver(0e2d55e4-7343-4bc1-8a02-a707014e8ced): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:17.066109 kubelet[2754]: E1212 17:20:17.066066 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:17.511614 kubelet[2754]: E1212 17:20:17.510938 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:17.511793 containerd[1581]: time="2025-12-12T17:20:17.511752286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldm6n,Uid:a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7,Namespace:kube-system,Attempt:0,}" Dec 12 17:20:17.562376 systemd-networkd[1497]: cali27d0ff625b6: Gained IPv6LL Dec 12 17:20:17.653206 systemd-networkd[1497]: calib16999ea8d4: Link UP Dec 12 17:20:17.654321 systemd-networkd[1497]: calib16999ea8d4: Gained carrier Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.554 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0 coredns-674b8bbfcf- kube-system a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7 833 0 2025-12-12 17:19:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ldm6n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib16999ea8d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.554 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.589 [INFO][4895] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" HandleID="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Workload="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.589 [INFO][4895] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" HandleID="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Workload="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcfd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ldm6n", "timestamp":"2025-12-12 17:20:17.589067001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.589 [INFO][4895] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.589 [INFO][4895] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.589 [INFO][4895] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.601 [INFO][4895] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.609 [INFO][4895] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.621 [INFO][4895] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.624 [INFO][4895] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.627 [INFO][4895] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.627 [INFO][4895] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.629 [INFO][4895] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.638 [INFO][4895] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.646 [INFO][4895] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.646 [INFO][4895] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" host="localhost" Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.646 [INFO][4895] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:20:17.673251 containerd[1581]: 2025-12-12 17:20:17.646 [INFO][4895] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" HandleID="k8s-pod-network.8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Workload="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.673782 containerd[1581]: 2025-12-12 17:20:17.650 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ldm6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib16999ea8d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:17.673782 containerd[1581]: 2025-12-12 17:20:17.650 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.673782 containerd[1581]: 2025-12-12 17:20:17.650 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib16999ea8d4 ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.673782 containerd[1581]: 2025-12-12 17:20:17.654 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.673782 
containerd[1581]: 2025-12-12 17:20:17.654 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa", Pod:"coredns-674b8bbfcf-ldm6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib16999ea8d4", MAC:"02:e4:b0:0f:a5:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:20:17.673782 containerd[1581]: 2025-12-12 17:20:17.669 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldm6n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldm6n-eth0" Dec 12 17:20:17.702125 containerd[1581]: time="2025-12-12T17:20:17.701477613Z" level=info msg="connecting to shim 8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa" address="unix:///run/containerd/s/17707f106746ee6b43dbf0131254af40ba5a94d0dd5bf7b9b4852ec803fe30ac" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:20:17.704000 audit[4921]: NETFILTER_CFG table=filter:141 family=2 entries=48 op=nft_register_chain pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:20:17.704000 audit[4921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=fffffc186740 a2=0 a3=ffff944f6fa8 items=0 ppid=4155 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.704000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:20:17.724764 kubelet[2754]: E1212 17:20:17.724708 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:17.726932 kubelet[2754]: E1212 17:20:17.726497 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:17.727452 kubelet[2754]: E1212 17:20:17.727411 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:17.730876 systemd[1]: Started cri-containerd-8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa.scope - libcontainer container 8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa. 
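The ipam/ipam.go messages above show Calico confirming block affinity for 192.168.88.128/26 on this host and then claiming 192.168.88.136 from that block for coredns-674b8bbfcf-ldm6n. A minimal sketch of the containment check those records imply, using only the Python standard library (illustrative, not Calico's own code):

    import ipaddress

    # Values taken from the ipam messages above.
    block = ipaddress.ip_network("192.168.88.128/26")
    claimed = ipaddress.ip_address("192.168.88.136")

    print(claimed in block)        # True: the claimed IP lies inside the affine block
    print(block.num_addresses)     # 64 addresses in a /26 block
    print(block.network_address, block.broadcast_address)  # 192.168.88.128 192.168.88.191

The workload endpoint is then written with IPNetworks ["192.168.88.136/32"], i.e. just the single claimed address.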
Dec 12 17:20:17.748000 audit: BPF prog-id=244 op=LOAD Dec 12 17:20:17.749000 audit: BPF prog-id=245 op=LOAD Dec 12 17:20:17.749000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.750000 audit: BPF prog-id=245 op=UNLOAD Dec 12 17:20:17.750000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.751000 audit: BPF prog-id=246 op=LOAD Dec 12 17:20:17.751000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.751000 audit: BPF prog-id=247 op=LOAD Dec 12 17:20:17.751000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.751000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:20:17.751000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.751000 audit: BPF prog-id=246 op=UNLOAD Dec 12 17:20:17.751000 audit[4933]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.751000 audit: BPF prog-id=248 op=LOAD Dec 12 17:20:17.751000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862313162383965326437623664393463366639613962323661623131 Dec 12 17:20:17.754775 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:20:17.775000 audit[4953]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:17.775000 audit[4953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff67ea6b0 a2=0 a3=1 items=0 ppid=2913 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:17.782000 audit[4953]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:17.782000 audit[4953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff67ea6b0 a2=0 a3=1 items=0 ppid=2913 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:17.786329 containerd[1581]: time="2025-12-12T17:20:17.786180531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldm6n,Uid:a725ab6d-a9ea-4de3-a9de-4d649ce6ecf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa\"" Dec 12 17:20:17.787125 kubelet[2754]: E1212 17:20:17.787098 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:17.794587 containerd[1581]: time="2025-12-12T17:20:17.794547375Z" level=info msg="CreateContainer within sandbox \"8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:20:17.810843 containerd[1581]: time="2025-12-12T17:20:17.810740702Z" level=info msg="Container 64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:20:17.819172 containerd[1581]: time="2025-12-12T17:20:17.819121066Z" level=info msg="CreateContainer within sandbox \"8b11b89e2d7b6d94c6f9a9b26ab1165e977c1d76447001efa3f3d89c787daffa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5\"" Dec 12 17:20:17.820791 containerd[1581]: time="2025-12-12T17:20:17.820182946Z" level=info msg="StartContainer for \"64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5\"" Dec 12 17:20:17.821629 containerd[1581]: time="2025-12-12T17:20:17.821592107Z" level=info msg="connecting to shim 64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5" address="unix:///run/containerd/s/17707f106746ee6b43dbf0131254af40ba5a94d0dd5bf7b9b4852ec803fe30ac" protocol=ttrpc version=3 Dec 12 17:20:17.847804 systemd[1]: Started cri-containerd-64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5.scope - libcontainer container 64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5. Dec 12 17:20:17.860000 audit: BPF prog-id=249 op=LOAD Dec 12 17:20:17.861000 audit: BPF prog-id=250 op=LOAD Dec 12 17:20:17.861000 audit[4962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.861000 audit: BPF prog-id=250 op=UNLOAD Dec 12 17:20:17.861000 audit[4962]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.861000 audit: BPF prog-id=251 op=LOAD Dec 12 17:20:17.861000 audit[4962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.862000 audit: BPF prog-id=252 op=LOAD Dec 12 17:20:17.862000 audit[4962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 
a1=4000176168 a2=98 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.862000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:20:17.862000 audit[4962]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.862000 audit: BPF prog-id=251 op=UNLOAD Dec 12 17:20:17.862000 audit[4962]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.862000 audit: BPF prog-id=253 op=LOAD Dec 12 17:20:17.862000 audit[4962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4920 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:17.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634616531623533363633626465316464613462633531376433636664 Dec 12 17:20:17.882236 systemd-networkd[1497]: cali426df8dcbb8: Gained IPv6LL Dec 12 17:20:17.888740 containerd[1581]: time="2025-12-12T17:20:17.888576018Z" level=info msg="StartContainer for \"64ae1b53663bde1dda4bc517d3cfdf72df8b308051fbd6c890bc8542d146b3b5\" returns successfully" Dec 12 17:20:18.726246 kubelet[2754]: E1212 17:20:18.726197 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:18.727742 kubelet[2754]: E1212 17:20:18.726917 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:18.728812 kubelet[2754]: E1212 17:20:18.728768 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:18.747901 kubelet[2754]: I1212 17:20:18.747776 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ldm6n" podStartSLOduration=41.747756027 podStartE2EDuration="41.747756027s" podCreationTimestamp="2025-12-12 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:20:18.745941106 +0000 UTC m=+47.329073153" watchObservedRunningTime="2025-12-12 17:20:18.747756027 +0000 UTC m=+47.330888074" Dec 12 17:20:18.778645 systemd-networkd[1497]: calib16999ea8d4: Gained IPv6LL Dec 12 17:20:18.843000 audit[5002]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:18.843000 audit[5002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd19f6cf0 a2=0 a3=1 items=0 ppid=2913 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:18.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:18.850000 audit[5002]: NETFILTER_CFG table=nat:145 family=2 entries=44 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:18.850000 audit[5002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd19f6cf0 a2=0 a3=1 items=0 ppid=2913 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:18.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:18.869000 audit[5004]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:18.869000 audit[5004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe7654a80 a2=0 a3=1 items=0 ppid=2913 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:18.869000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:18.885000 audit[5004]: NETFILTER_CFG table=nat:147 family=2 entries=56 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:18.885000 audit[5004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe7654a80 a2=0 a3=1 items=0 ppid=2913 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:18.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:19.731885 kubelet[2754]: E1212 17:20:19.731853 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:20.734964 kubelet[2754]: E1212 17:20:20.734927 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:21.513767 containerd[1581]: time="2025-12-12T17:20:21.513485529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:20:21.708838 containerd[1581]: time="2025-12-12T17:20:21.708767157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:21.710154 containerd[1581]: time="2025-12-12T17:20:21.709968318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:20:21.710154 containerd[1581]: time="2025-12-12T17:20:21.710052678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:21.710906 kubelet[2754]: E1212 17:20:21.710627 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:20:21.710990 kubelet[2754]: E1212 17:20:21.710951 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:20:21.711286 kubelet[2754]: E1212 17:20:21.711152 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d83762e8f24b14ac1baa2822839bf5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kglt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7847574df-8wxnl_calico-system(62dd324a-6db6-477a-9870-2e631369a8d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:21.718264 containerd[1581]: time="2025-12-12T17:20:21.717854641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:20:21.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.23:22-10.0.0.1:43650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:21.759535 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:43650.service - OpenSSH per-connection server daemon (10.0.0.1:43650). Dec 12 17:20:21.763393 kernel: kauditd_printk_skb: 97 callbacks suppressed Dec 12 17:20:21.763516 kernel: audit: type=1130 audit(1765560021.759:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.23:22-10.0.0.1:43650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:21.832000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.834736 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 43650 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:21.836573 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:21.835000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.844936 kernel: audit: type=1101 audit(1765560021.832:753): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.845055 kernel: audit: type=1103 audit(1765560021.835:754): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.847091 kernel: audit: type=1006 audit(1765560021.835:755): pid=5010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 17:20:21.847279 kernel: audit: type=1300 audit(1765560021.835:755): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd5de9b0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:21.835000 audit[5010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd5de9b0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:21.849921 systemd-logind[1564]: New session 10 of user core. Dec 12 17:20:21.835000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:21.852319 kernel: audit: type=1327 audit(1765560021.835:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:21.859764 systemd[1]: Started session-10.scope - Session 10 of User core. 
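The audit PROCTITLE fields in the records above are hex-encoded process titles; when the kernel captured a full argv, the arguments are separated by NUL bytes. A small decoding sketch (plain Python, hypothetical helper name):

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv."""
        parts = bytes.fromhex(hex_value).split(b"\x00")
        return " ".join(part.decode(errors="replace") for part in parts if part)

    # Sample taken from the sshd-session audit record above.
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]

The iptables-restore records earlier decode the same way, to "iptables-restore -w 5 -W 100000 --noflush --counters".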
Dec 12 17:20:21.861000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.863000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.869677 kernel: audit: type=1105 audit(1765560021.861:756): pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.869769 kernel: audit: type=1103 audit(1765560021.863:757): pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:21.940486 containerd[1581]: time="2025-12-12T17:20:21.940435719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:21.946578 containerd[1581]: time="2025-12-12T17:20:21.946495721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:20:21.946912 containerd[1581]: time="2025-12-12T17:20:21.946659441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:21.946947 kubelet[2754]: E1212 17:20:21.946875 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:20:21.946947 kubelet[2754]: E1212 17:20:21.946928 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:20:21.947334 kubelet[2754]: E1212 17:20:21.947048 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kglt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7847574df-8wxnl_calico-system(62dd324a-6db6-477a-9870-2e631369a8d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:21.948440 kubelet[2754]: E1212 17:20:21.948288 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1" Dec 12 17:20:22.000999 sshd[5013]: Connection closed by 10.0.0.1 port 43650 Dec 12 17:20:22.002491 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:22.003000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.003000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.012184 kernel: audit: type=1106 audit(1765560022.003:758): pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.012270 kernel: audit: type=1104 audit(1765560022.003:759): pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.013190 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:43650.service: Deactivated successfully. Dec 12 17:20:22.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.23:22-10.0.0.1:43650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:22.015637 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:20:22.017868 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:20:22.020723 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:43654.service - OpenSSH per-connection server daemon (10.0.0.1:43654). Dec 12 17:20:22.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.23:22-10.0.0.1:43654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:22.023912 systemd-logind[1564]: Removed session 10. Dec 12 17:20:22.090000 audit[5028]: USER_ACCT pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.091305 sshd[5028]: Accepted publickey for core from 10.0.0.1 port 43654 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:22.091000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.091000 audit[5028]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff11056c0 a2=3 a3=0 items=0 ppid=1 pid=5028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:22.091000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:22.092471 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:22.098947 systemd-logind[1564]: New session 11 of user core. Dec 12 17:20:22.107740 systemd[1]: Started session-11.scope - Session 11 of User core. 
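The pod_startup_latency_tracker record for coredns-674b8bbfcf-ldm6n earlier reports podStartSLOduration=41.747756027s with zero-valued pull timestamps; that figure is simply the gap between podCreationTimestamp (17:19:37) and the watch-observed running time (17:20:18.747756027). The arithmetic, trimmed to microsecond precision:

    from datetime import datetime, timezone

    # Timestamps from the pod_startup_latency_tracker record above.
    created = datetime(2025, 12, 12, 17, 19, 37, 0, tzinfo=timezone.utc)
    running = datetime(2025, 12, 12, 17, 20, 18, 747756, tzinfo=timezone.utc)

    print((running - created).total_seconds())   # 41.747756, matching podStartSLOduration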
Dec 12 17:20:22.109000 audit[5028]: USER_START pid=5028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.110000 audit[5031]: CRED_ACQ pid=5031 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.267022 sshd[5031]: Connection closed by 10.0.0.1 port 43654 Dec 12 17:20:22.267402 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:22.270000 audit[5028]: USER_END pid=5028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.270000 audit[5028]: CRED_DISP pid=5028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.280188 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:43654.service: Deactivated successfully. Dec 12 17:20:22.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.23:22-10.0.0.1:43654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:22.283306 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:20:22.285471 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:20:22.290211 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:43658.service - OpenSSH per-connection server daemon (10.0.0.1:43658). Dec 12 17:20:22.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.23:22-10.0.0.1:43658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:22.291772 systemd-logind[1564]: Removed session 11. 
Dec 12 17:20:22.357000 audit[5043]: USER_ACCT pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.358244 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 43658 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:22.359000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.359000 audit[5043]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc15b0570 a2=3 a3=0 items=0 ppid=1 pid=5043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:22.359000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:22.360213 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:22.364905 systemd-logind[1564]: New session 12 of user core. Dec 12 17:20:22.372763 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:20:22.374000 audit[5043]: USER_START pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.376000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.507346 sshd[5046]: Connection closed by 10.0.0.1 port 43658 Dec 12 17:20:22.508069 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:22.508000 audit[5043]: USER_END pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.509000 audit[5043]: CRED_DISP pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:22.512623 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:43658.service: Deactivated successfully. Dec 12 17:20:22.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.23:22-10.0.0.1:43658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:22.514581 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:20:22.515351 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:20:22.516349 systemd-logind[1564]: Removed session 12. 
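The recurring kubelet "Nameserver limits exceeded" warnings mean the node's resolver configuration lists more nameservers than kubelet will pass through; only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are dropped. A rough illustration of that trimming, assuming a hypothetical resolv.conf with a fourth entry (this is not kubelet's actual code):

    MAX_NAMESERVERS = 3  # cap reflected by the three-server "applied nameserver line" in the warnings

    def applied_nameservers(resolv_conf_text: str) -> list:
        """Keep only the first MAX_NAMESERVERS 'nameserver' entries."""
        servers = []
        for line in resolv_conf_text.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAX_NAMESERVERS]

    # Hypothetical resolv.conf consistent with the applied line in the warnings.
    sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    print(applied_nameservers(sample))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']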
Dec 12 17:20:26.511811 containerd[1581]: time="2025-12-12T17:20:26.511720108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:20:26.719934 containerd[1581]: time="2025-12-12T17:20:26.719888921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:26.720875 containerd[1581]: time="2025-12-12T17:20:26.720839722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:20:26.721030 containerd[1581]: time="2025-12-12T17:20:26.720852522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:26.721212 kubelet[2754]: E1212 17:20:26.721027 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:20:26.721212 kubelet[2754]: E1212 17:20:26.721142 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:20:26.721979 kubelet[2754]: E1212 17:20:26.721285 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfzgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-578b47d77d-m8dkw_calico-system(2ac1174a-7255-43cd-9145-6ba385a2a343): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:26.722478 kubelet[2754]: E1212 17:20:26.722446 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:27.512209 containerd[1581]: time="2025-12-12T17:20:27.511687395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:20:27.526520 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:20:27.526617 kernel: audit: type=1130 audit(1765560027.523:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.23:22-10.0.0.1:43664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.23:22-10.0.0.1:43664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.523425 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:43664.service - OpenSSH per-connection server daemon (10.0.0.1:43664). 
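Each failed pull above follows the same containerd sequence: an info-level PullImage record, a 404 from ghcr.io, an error-level "PullImage ... failed" record, and a matching kubelet ErrImagePull. A minimal Python sketch (an illustration, not part of the journal) that tallies the failing image references from the error-level records in a journal export like this one:

    import re
    from collections import Counter

    # containerd logs the failure as: level=error msg="PullImage \"<image>\" failed"
    FAILED_PULL = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

    def failed_pulls(journal_text: str) -> Counter:
        """Count error-level PullImage failures per image reference."""
        return Counter(FAILED_PULL.findall(journal_text))

    # Hypothetical usage (the export path is an assumption, not from this journal):
    # print(failed_pulls(open("journal-export.txt").read()))

Run over this journal, that would show each ghcr.io/flatcar/calico/* image at tag v3.30.4 repeatedly failing with the same not-found error.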
Dec 12 17:20:27.589000 audit[5071]: USER_ACCT pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.590698 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 43664 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:27.593531 kernel: audit: type=1101 audit(1765560027.589:780): pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.593000 audit[5071]: CRED_ACQ pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.596787 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:27.598576 kernel: audit: type=1103 audit(1765560027.593:781): pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.598640 kernel: audit: type=1006 audit(1765560027.596:782): pid=5071 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:20:27.596000 audit[5071]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbbfe420 a2=3 a3=0 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:27.601834 kernel: audit: type=1300 audit(1765560027.596:782): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbbfe420 a2=3 a3=0 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:27.602026 kernel: audit: type=1327 audit(1765560027.596:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:27.596000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:27.606954 systemd-logind[1564]: New session 13 of user core. Dec 12 17:20:27.612703 systemd[1]: Started session-13.scope - Session 13 of User core. 
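sshd identifies the key only by its SHA256 fingerprint (the wUd39nl... value repeated for every connection in this journal). A small sketch of how that fingerprint is derived from an authorized_keys entry, useful for matching a log line back to a key; the file path in the usage comment is hypothetical and not taken from this journal:

    import base64
    import hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        """SHA256 fingerprint as sshd logs it: unpadded base64 of sha256(key blob)."""
        key_b64 = authorized_keys_line.split()[1]   # "<type> <base64-blob> [comment]"
        digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Hypothetical usage: compare against the fingerprint in the log lines above.
    # with open("/home/core/.ssh/authorized_keys") as f:
    #     for line in f:
    #         print(ssh_fingerprint(line))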
Dec 12 17:20:27.614000 audit[5071]: USER_START pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.616000 audit[5074]: CRED_ACQ pid=5074 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.621523 kernel: audit: type=1105 audit(1765560027.614:783): pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.621576 kernel: audit: type=1103 audit(1765560027.616:784): pid=5074 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.697463 containerd[1581]: time="2025-12-12T17:20:27.696177759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:27.699546 containerd[1581]: time="2025-12-12T17:20:27.699113519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:20:27.699546 containerd[1581]: time="2025-12-12T17:20:27.699215079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:27.699660 kubelet[2754]: E1212 17:20:27.699424 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:20:27.699660 kubelet[2754]: E1212 17:20:27.699472 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:20:27.699660 kubelet[2754]: E1212 17:20:27.699622 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc9tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nj7gb_calico-system(f493074d-c6eb-434c-b64e-346bcf34db0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:27.701260 kubelet[2754]: E1212 17:20:27.701212 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:27.717251 sshd[5074]: Connection closed by 10.0.0.1 port 43664 Dec 12 17:20:27.719263 sshd-session[5071]: pam_unix(sshd:session): session 
closed for user core Dec 12 17:20:27.720000 audit[5071]: USER_END pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.720000 audit[5071]: CRED_DISP pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.723739 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:43664.service: Deactivated successfully. Dec 12 17:20:27.726762 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:20:27.728824 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:20:27.729722 systemd-logind[1564]: Removed session 13. Dec 12 17:20:27.729966 kernel: audit: type=1106 audit(1765560027.720:785): pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.730286 kernel: audit: type=1104 audit(1765560027.720:786): pid=5071 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:27.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.23:22-10.0.0.1:43664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:29.512055 containerd[1581]: time="2025-12-12T17:20:29.511958682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:20:29.710807 containerd[1581]: time="2025-12-12T17:20:29.710745763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:29.712712 containerd[1581]: time="2025-12-12T17:20:29.712164324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:20:29.712712 containerd[1581]: time="2025-12-12T17:20:29.712231764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:29.713353 kubelet[2754]: E1212 17:20:29.713307 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:29.713670 kubelet[2754]: E1212 17:20:29.713363 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:29.713670 kubelet[2754]: E1212 17:20:29.713526 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84646c749c-5xrgw_calico-apiserver(f35c0998-e01c-46ee-bdc1-a591da003d92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:29.714809 kubelet[2754]: E1212 17:20:29.714738 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:30.511384 containerd[1581]: time="2025-12-12T17:20:30.511344325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:20:30.685415 containerd[1581]: time="2025-12-12T17:20:30.685330439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:30.686693 containerd[1581]: time="2025-12-12T17:20:30.686624759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:20:30.686779 containerd[1581]: time="2025-12-12T17:20:30.686744479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:30.686968 kubelet[2754]: E1212 17:20:30.686933 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:30.687016 kubelet[2754]: E1212 17:20:30.686981 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:30.687528 kubelet[2754]: E1212 17:20:30.687454 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv4jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84646c749c-wxdfq_calico-apiserver(0e2d55e4-7343-4bc1-8a02-a707014e8ced): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:30.688655 kubelet[2754]: E1212 17:20:30.688622 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:32.512724 containerd[1581]: time="2025-12-12T17:20:32.512654173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:20:32.726614 containerd[1581]: time="2025-12-12T17:20:32.726534290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:32.727843 containerd[1581]: time="2025-12-12T17:20:32.727799530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:20:32.727935 containerd[1581]: time="2025-12-12T17:20:32.727872610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:32.728054 kubelet[2754]: E1212 17:20:32.728000 2754 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:20:32.729186 kubelet[2754]: E1212 17:20:32.728067 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:20:32.729938 kubelet[2754]: E1212 17:20:32.729892 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:32.737942 containerd[1581]: time="2025-12-12T17:20:32.737903812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:20:32.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.23:22-10.0.0.1:52970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:32.740014 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:52970.service - OpenSSH per-connection server daemon (10.0.0.1:52970). 
Dec 12 17:20:32.744105 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:20:32.744293 kernel: audit: type=1130 audit(1765560032.739:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.23:22-10.0.0.1:52970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:32.795000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.796794 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 52970 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:32.799000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.800259 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:32.802525 kernel: audit: type=1101 audit(1765560032.795:789): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.802566 kernel: audit: type=1103 audit(1765560032.799:790): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.804421 kernel: audit: type=1006 audit(1765560032.799:791): pid=5091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:20:32.804605 kernel: audit: type=1300 audit(1765560032.799:791): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc478e610 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:32.799000 audit[5091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc478e610 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:32.799000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:32.809618 kernel: audit: type=1327 audit(1765560032.799:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:32.811811 systemd-logind[1564]: New session 14 of user core. Dec 12 17:20:32.821757 systemd[1]: Started session-14.scope - Session 14 of User core. 
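The kernel "audit: type=NNNN" lines in these blocks are the raw form of the named userspace records printed next to them; the timestamps and serial numbers match. A small lookup table for the numeric types that appear in this journal, with names paired off the matching records above (LOGIN is an assumed name, since type=1006 only appears here in numeric form):

    AUDIT_TYPES = {
        1101: "USER_ACCT",      # PAM accounting (account allowed to log in)
        1103: "CRED_ACQ",       # PAM credentials acquired
        1104: "CRED_DISP",      # PAM credentials disposed
        1105: "USER_START",     # PAM session opened
        1106: "USER_END",       # PAM session closed
        1130: "SERVICE_START",  # systemd unit started
        1300: "SYSCALL",        # syscall record (the auid/ses change above)
        1327: "PROCTITLE",      # hex-encoded process title
        1006: "LOGIN",          # login uid/session assignment (assumed name)
    }

    def audit_type_name(n: int) -> str:
        return AUDIT_TYPES.get(n, f"type={n}")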
Dec 12 17:20:32.823000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.827531 kernel: audit: type=1105 audit(1765560032.823:792): pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.827000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.831555 kernel: audit: type=1103 audit(1765560032.827:793): pid=5094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.920365 sshd[5094]: Connection closed by 10.0.0.1 port 52970 Dec 12 17:20:32.920280 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:32.922000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.922000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.926853 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:52970.service: Deactivated successfully. Dec 12 17:20:32.928623 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:20:32.931168 kernel: audit: type=1106 audit(1765560032.922:794): pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.931245 kernel: audit: type=1104 audit(1765560032.922:795): pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:32.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.23:22-10.0.0.1:52970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:32.931822 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:20:32.932738 systemd-logind[1564]: Removed session 14. 
Dec 12 17:20:32.935592 containerd[1581]: time="2025-12-12T17:20:32.935556806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:32.936633 containerd[1581]: time="2025-12-12T17:20:32.936597326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:20:32.936714 containerd[1581]: time="2025-12-12T17:20:32.936659886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:32.936858 kubelet[2754]: E1212 17:20:32.936810 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:20:32.936910 kubelet[2754]: E1212 17:20:32.936874 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:20:32.937033 kubelet[2754]: E1212 17:20:32.936987 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:32.938216 kubelet[2754]: E1212 17:20:32.938149 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:36.511659 kubelet[2754]: E1212 17:20:36.511545 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1" Dec 12 17:20:37.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.23:22-10.0.0.1:52982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:37.935963 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982). Dec 12 17:20:37.937539 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:20:37.937631 kernel: audit: type=1130 audit(1765560037.935:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.23:22-10.0.0.1:52982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:38.033000 audit[5117]: USER_ACCT pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.033769 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:38.036000 audit[5117]: CRED_ACQ pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.037954 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:38.040477 kernel: audit: type=1101 audit(1765560038.033:798): pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.040561 kernel: audit: type=1103 audit(1765560038.036:799): pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.042844 kernel: audit: type=1006 audit(1765560038.036:800): pid=5117 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:20:38.036000 audit[5117]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed545680 a2=3 a3=0 items=0 ppid=1 pid=5117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:38.046057 systemd-logind[1564]: New session 15 of user core. Dec 12 17:20:38.046993 kernel: audit: type=1300 audit(1765560038.036:800): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed545680 a2=3 a3=0 items=0 ppid=1 pid=5117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:38.047101 kernel: audit: type=1327 audit(1765560038.036:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:38.036000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:38.053884 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 17:20:38.055000 audit[5117]: USER_START pid=5117 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.060537 kernel: audit: type=1105 audit(1765560038.055:801): pid=5117 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.060000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.064588 kernel: audit: type=1103 audit(1765560038.060:802): pid=5122 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.226163 sshd[5122]: Connection closed by 10.0.0.1 port 52982 Dec 12 17:20:38.226697 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:38.228000 audit[5117]: USER_END pid=5117 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.231648 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:20:38.233444 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:52982.service: Deactivated successfully. Dec 12 17:20:38.228000 audit[5117]: CRED_DISP pid=5117 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.238090 kernel: audit: type=1106 audit(1765560038.228:803): pid=5117 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.238174 kernel: audit: type=1104 audit(1765560038.228:804): pid=5117 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:38.237797 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:20:38.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.23:22-10.0.0.1:52982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:38.240766 systemd-logind[1564]: Removed session 15. 
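The same open/close pattern repeats for each SSH connection: Accepted publickey, PAM session open, systemd-logind "New session N", then USER_END/CRED_DISP and "Removed session N". A minimal sketch (an illustration, not part of the journal) that pairs those logind messages to confirm every session in a dump like this one was closed:

    import re

    NEW_SESSION = re.compile(r"New session (\d+) of user (\S+)\.")
    REMOVED_SESSION = re.compile(r"Removed session (\d+)\.")

    def session_status(journal_text: str) -> dict:
        """Map session id -> (user, closed?) from systemd-logind messages."""
        users = {int(m.group(1)): m.group(2) for m in NEW_SESSION.finditer(journal_text)}
        closed = {int(m.group(1)) for m in REMOVED_SESSION.finditer(journal_text)}
        return {sid: (user, sid in closed) for sid, user in users.items()}

For the sessions visible in this section (12 through 18), every open has a matching close.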
Dec 12 17:20:38.511735 kubelet[2754]: E1212 17:20:38.511594 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:40.512075 kubelet[2754]: E1212 17:20:40.511912 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:42.511975 kubelet[2754]: E1212 17:20:42.511878 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:43.248560 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:20:43.248682 kernel: audit: type=1130 audit(1765560043.244:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.23:22-10.0.0.1:48318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:43.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.23:22-10.0.0.1:48318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:43.244664 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:48318.service - OpenSSH per-connection server daemon (10.0.0.1:48318). 
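By this point the kubelet has moved from ErrImagePull to ImagePullBackOff for each workload and re-logs the back-off on every pod sync. A minimal sketch (assuming the journal export keeps each record on its own line, as journalctl prints it) that maps each pod named in those records to the images it cannot pull:

    import re
    from collections import defaultdict

    BACKOFF_IMAGE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
    POD_REF = re.compile(r'pod="([^"]+)"')

    def backoffs_by_pod(journal_text: str) -> dict:
        """Map pod (namespace/name) -> set of image refs stuck in ImagePullBackOff."""
        result = defaultdict(set)
        for record in journal_text.splitlines():
            pod = POD_REF.search(record)
            if not pod:
                continue
            for image in BACKOFF_IMAGE.findall(record):
                result[pod.group(1)].add(image)
        return dict(result)

Over this journal that yields the calico kube-controllers, goldmane, apiserver, whisker, whisker-backend, csi, and node-driver-registrar images, all tagged v3.30.4.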
Dec 12 17:20:43.302976 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 48318 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:43.302000 audit[5135]: USER_ACCT pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.307535 kernel: audit: type=1101 audit(1765560043.302:807): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.307623 kernel: audit: type=1103 audit(1765560043.306:808): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.306000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.308123 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:43.312540 kernel: audit: type=1006 audit(1765560043.307:809): pid=5135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 17:20:43.312731 kernel: audit: type=1300 audit(1765560043.307:809): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef5cc9a0 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:43.307000 audit[5135]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef5cc9a0 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:43.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:43.317967 kernel: audit: type=1327 audit(1765560043.307:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:43.322176 systemd-logind[1564]: New session 16 of user core. Dec 12 17:20:43.331800 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:20:43.336000 audit[5135]: USER_START pid=5135 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.338000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.345448 kernel: audit: type=1105 audit(1765560043.336:810): pid=5135 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.345632 kernel: audit: type=1103 audit(1765560043.338:811): pid=5138 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.492288 sshd[5138]: Connection closed by 10.0.0.1 port 48318 Dec 12 17:20:43.492970 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:43.494000 audit[5135]: USER_END pid=5135 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.495000 audit[5135]: CRED_DISP pid=5135 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.502544 kernel: audit: type=1106 audit(1765560043.494:812): pid=5135 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.502715 kernel: audit: type=1104 audit(1765560043.495:813): pid=5135 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.507368 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:48318.service: Deactivated successfully. Dec 12 17:20:43.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.23:22-10.0.0.1:48318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:43.511122 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:20:43.512605 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:20:43.516481 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:48324.service - OpenSSH per-connection server daemon (10.0.0.1:48324). 
Dec 12 17:20:43.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.23:22-10.0.0.1:48324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:43.517850 systemd-logind[1564]: Removed session 16. Dec 12 17:20:43.583677 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 48324 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:43.583000 audit[5151]: USER_ACCT pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.584000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.584000 audit[5151]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcfeef9e0 a2=3 a3=0 items=0 ppid=1 pid=5151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:43.584000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:43.585513 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:43.591689 systemd-logind[1564]: New session 17 of user core. Dec 12 17:20:43.600759 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:20:43.602000 audit[5151]: USER_START pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.604000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.779071 sshd[5154]: Connection closed by 10.0.0.1 port 48324 Dec 12 17:20:43.779974 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:43.781000 audit[5151]: USER_END pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.782000 audit[5151]: CRED_DISP pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.793900 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:48324.service: Deactivated successfully. Dec 12 17:20:43.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.23:22-10.0.0.1:48324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:43.796133 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:20:43.797141 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:20:43.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.23:22-10.0.0.1:48334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:43.800404 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:48334.service - OpenSSH per-connection server daemon (10.0.0.1:48334). Dec 12 17:20:43.801437 systemd-logind[1564]: Removed session 17. Dec 12 17:20:43.874000 audit[5166]: USER_ACCT pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.874855 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 48334 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:43.876000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.876000 audit[5166]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2523b50 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:43.876000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:43.877308 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:43.882371 systemd-logind[1564]: New session 18 of user core. Dec 12 17:20:43.888773 systemd[1]: Started session-18.scope - Session 18 of User core. 
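Each SSH login above produces the same PAM audit sequence: USER_ACCT and CRED_ACQ while the connection is still unattributed (ses=4294967295), then USER_START and a second CRED_ACQ under the newly assigned session id, and USER_END/CRED_DISP plus a SERVICE_STOP for the per-connection unit when the session closes. A minimal sketch for grouping those records by session id so the sequence is easier to scan; the journal is assumed to have been exported to a plain-text file (journal.txt is a hypothetical path, not something referenced in this log):

```python
# Minimal sketch: group the PAM audit records above by their ses= field.
# Records emitted before the session id is assigned carry the "unset"
# value 4294967295, as visible in the USER_ACCT lines above.
import re
from collections import defaultdict

EVENT_RE = re.compile(r"audit\[\d+\]: (USER_ACCT|CRED_ACQ|USER_START|USER_END|CRED_DISP)\b")
SES_RE = re.compile(r"\bses=(\d+)\b")

def pam_events_by_session(path: str = "journal.txt") -> dict[int, list[str]]:
    """Map audit session id -> ordered list of PAM event types seen for it."""
    by_ses: dict[int, list[str]] = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            event, ses = EVENT_RE.search(line), SES_RE.search(line)
            if event and ses:
                by_ses[int(ses.group(1))].append(event.group(1))
    return dict(by_ses)
```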
Dec 12 17:20:43.892000 audit[5166]: USER_START pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:43.895000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:44.513572 kubelet[2754]: E1212 17:20:44.513485 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:20:44.604000 audit[5185]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:44.604000 audit[5185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcd5716e0 a2=0 a3=1 items=0 ppid=2913 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:44.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:44.617000 audit[5185]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:44.617000 audit[5185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcd5716e0 a2=0 a3=1 items=0 ppid=2913 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:44.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:44.621814 sshd[5170]: Connection closed by 10.0.0.1 port 48334 Dec 12 17:20:44.622366 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:44.625000 audit[5166]: USER_END pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:44.625000 audit[5166]: CRED_DISP pid=5166 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:44.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.23:22-10.0.0.1:48334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:44.635919 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:48334.service: Deactivated successfully. Dec 12 17:20:44.640001 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:20:44.644812 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:20:44.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.23:22-10.0.0.1:48344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:44.652218 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:48344.service - OpenSSH per-connection server daemon (10.0.0.1:48344). Dec 12 17:20:44.653928 systemd-logind[1564]: Removed session 18. Dec 12 17:20:44.724000 audit[5194]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=5194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:44.724000 audit[5194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffef5cf710 a2=0 a3=1 items=0 ppid=2913 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:44.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:44.733000 audit[5194]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:44.733000 audit[5194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffef5cf710 a2=0 a3=1 items=0 ppid=2913 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:44.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:44.749000 audit[5191]: USER_ACCT pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:44.750017 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 48344 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:44.751000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:44.751000 audit[5191]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe46ad240 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:44.751000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:44.752710 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:44.761784 systemd-logind[1564]: New session 19 of user core. Dec 12 17:20:44.772780 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:20:44.775000 audit[5191]: USER_START pid=5191 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:44.777000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.106256 sshd[5195]: Connection closed by 10.0.0.1 port 48344 Dec 12 17:20:45.106858 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:45.109000 audit[5191]: USER_END pid=5191 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.109000 audit[5191]: CRED_DISP pid=5191 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.117915 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:48344.service: Deactivated successfully. Dec 12 17:20:45.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.23:22-10.0.0.1:48344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:45.122183 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:20:45.125010 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:20:45.131604 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:48352.service - OpenSSH per-connection server daemon (10.0.0.1:48352). Dec 12 17:20:45.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.23:22-10.0.0.1:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:45.133731 systemd-logind[1564]: Removed session 19. 
Dec 12 17:20:45.203000 audit[5231]: USER_ACCT pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.204176 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 48352 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:45.204000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.204000 audit[5231]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb0094b0 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:45.204000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:45.205844 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:45.214204 systemd-logind[1564]: New session 20 of user core. Dec 12 17:20:45.232922 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:20:45.237000 audit[5231]: USER_START pid=5231 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.239000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.338542 sshd[5234]: Connection closed by 10.0.0.1 port 48352 Dec 12 17:20:45.338733 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:45.340000 audit[5231]: USER_END pid=5231 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.340000 audit[5231]: CRED_DISP pid=5231 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:45.344373 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:20:45.344606 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:48352.service: Deactivated successfully. Dec 12 17:20:45.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.23:22-10.0.0.1:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:45.346885 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:20:45.349475 systemd-logind[1564]: Removed session 20. 
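The PROCTITLE fields in these audit records are hex-encoded command lines with NUL-separated arguments. A small standard-library sketch that decodes the two proctitles appearing above (the sshd session helper and the iptables-restore invocation behind the NETFILTER_CFG records):

```python
# Decode the hex-encoded PROCTITLE fields from the audit records above.
# The audit subsystem NUL-separates argv, so mapping NUL -> space recovers
# the command line as it was executed.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# sshd-session: core [priv]

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# iptables-restore -w 5 -W 100000 --noflush --counters
```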
Dec 12 17:20:45.511477 kubelet[2754]: E1212 17:20:45.511340 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:48.513097 containerd[1581]: time="2025-12-12T17:20:48.512932104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:20:48.789543 containerd[1581]: time="2025-12-12T17:20:48.789347246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:48.790985 containerd[1581]: time="2025-12-12T17:20:48.790836658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:20:48.790985 containerd[1581]: time="2025-12-12T17:20:48.790929419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:48.791419 kubelet[2754]: E1212 17:20:48.791247 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:20:48.791419 kubelet[2754]: E1212 17:20:48.791331 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:20:48.791864 kubelet[2754]: E1212 17:20:48.791596 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d83762e8f24b14ac1baa2822839bf5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kglt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7847574df-8wxnl_calico-system(62dd324a-6db6-477a-9870-2e631369a8d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:48.796197 containerd[1581]: time="2025-12-12T17:20:48.796159938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:20:49.010692 containerd[1581]: time="2025-12-12T17:20:49.010490527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:49.012553 containerd[1581]: time="2025-12-12T17:20:49.012468541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:20:49.013276 containerd[1581]: time="2025-12-12T17:20:49.012625263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:49.013395 kubelet[2754]: E1212 17:20:49.012907 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:20:49.013395 kubelet[2754]: E1212 17:20:49.012977 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:20:49.013395 kubelet[2754]: E1212 17:20:49.013134 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kglt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7847574df-8wxnl_calico-system(62dd324a-6db6-477a-9870-2e631369a8d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:49.014533 kubelet[2754]: E1212 17:20:49.014343 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1" Dec 12 17:20:49.553000 audit[5250]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:49.554906 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:20:49.554986 kernel: audit: type=1325 audit(1765560049.553:855): table=filter:152 family=2 entries=26 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:49.553000 audit[5250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc1b17b0 a2=0 a3=1 items=0 ppid=2913 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:49.561385 kernel: audit: type=1300 audit(1765560049.553:855): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc1b17b0 a2=0 a3=1 items=0 ppid=2913 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:49.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:49.564231 kernel: audit: type=1327 audit(1765560049.553:855): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:49.565000 audit[5250]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:49.565000 audit[5250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffdc1b17b0 a2=0 a3=1 items=0 ppid=2913 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:49.574145 kernel: audit: type=1325 audit(1765560049.565:856): table=nat:153 family=2 entries=104 op=nft_register_chain pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:20:49.574248 kernel: audit: type=1300 audit(1765560049.565:856): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffdc1b17b0 a2=0 a3=1 items=0 ppid=2913 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:49.574268 kernel: audit: type=1327 audit(1765560049.565:856): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:49.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:20:50.354273 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:48362.service - OpenSSH per-connection server daemon (10.0.0.1:48362). Dec 12 17:20:50.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.23:22-10.0.0.1:48362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:50.358565 kernel: audit: type=1130 audit(1765560050.353:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.23:22-10.0.0.1:48362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:50.425102 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 48362 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:50.424000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.430066 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:50.429000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.434266 kernel: audit: type=1101 audit(1765560050.424:858): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.434353 kernel: audit: type=1103 audit(1765560050.429:859): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.438479 kernel: audit: type=1006 audit(1765560050.429:860): pid=5252 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:20:50.429000 audit[5252]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea0dab10 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:50.429000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:50.441484 systemd-logind[1564]: New session 21 of user core. Dec 12 17:20:50.452782 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:20:50.458000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.460000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.553110 sshd[5255]: Connection closed by 10.0.0.1 port 48362 Dec 12 17:20:50.553491 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:50.555000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.556000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:50.559660 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:48362.service: Deactivated successfully. Dec 12 17:20:50.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.23:22-10.0.0.1:48362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:50.561800 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:20:50.562755 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:20:50.564500 systemd-logind[1564]: Removed session 21. 
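All of the calico image pulls in this journal fail the same way: containerd logs "fetch failed after status: 404 Not Found" from ghcr.io and the kubelet surfaces it as ErrImagePull, then ImagePullBackOff. A hedged sketch of how the missing tag could be confirmed independently of the kubelet via the registry's v2 manifest endpoint; the anonymous token flow and the Accept header are assumptions about standard registry behaviour, not something taken from this log:

```python
# Hedged sketch: check whether a tag exists in a v2 container registry,
# roughly the manifest lookup containerd performs before the 404 above.
# The ghcr.io anonymous token endpoint is an assumption (standard registry
# token flow for public images), not confirmed by this log.
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"

def tag_exists(repository: str, tag: str) -> bool:
    token_url = (f"https://{REGISTRY}/token?service={REGISTRY}"
                 f"&scope=repository:{repository}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:          # the status containerd reports above
            return False
        raise

# tag_exists("flatcar/calico/whisker", "v3.30.4") would return False,
# matching the 404s logged above.
```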
Dec 12 17:20:52.512182 containerd[1581]: time="2025-12-12T17:20:52.512110938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:20:52.724779 containerd[1581]: time="2025-12-12T17:20:52.724723231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:52.725837 containerd[1581]: time="2025-12-12T17:20:52.725769558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:20:52.725906 containerd[1581]: time="2025-12-12T17:20:52.725842438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:52.727150 kubelet[2754]: E1212 17:20:52.727096 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:20:52.727460 kubelet[2754]: E1212 17:20:52.727169 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:20:52.727460 kubelet[2754]: E1212 17:20:52.727308 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfzgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-578b47d77d-m8dkw_calico-system(2ac1174a-7255-43cd-9145-6ba385a2a343): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:52.729069 kubelet[2754]: E1212 17:20:52.728482 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-578b47d77d-m8dkw" podUID="2ac1174a-7255-43cd-9145-6ba385a2a343" Dec 12 17:20:53.513931 kubelet[2754]: E1212 17:20:53.513884 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:54.513532 containerd[1581]: time="2025-12-12T17:20:54.511190999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:20:54.698691 containerd[1581]: time="2025-12-12T17:20:54.698418092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:54.699624 containerd[1581]: time="2025-12-12T17:20:54.699534619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:54.699624 containerd[1581]: time="2025-12-12T17:20:54.699501859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:20:54.699860 kubelet[2754]: E1212 17:20:54.699807 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:20:54.700227 kubelet[2754]: E1212 17:20:54.699861 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:20:54.700227 kubelet[2754]: E1212 17:20:54.699998 2754 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc9tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nj7gb_calico-system(f493074d-c6eb-434c-b64e-346bcf34db0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:54.701210 kubelet[2754]: E1212 17:20:54.701153 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nj7gb" podUID="f493074d-c6eb-434c-b64e-346bcf34db0d" Dec 12 17:20:55.513962 kubelet[2754]: E1212 17:20:55.513935 2754 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:20:55.567792 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556). Dec 12 17:20:55.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.23:22-10.0.0.1:52556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:55.568820 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:20:55.569383 kernel: audit: type=1130 audit(1765560055.567:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.23:22-10.0.0.1:52556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:55.644000 audit[5276]: USER_ACCT pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.648011 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:55.648908 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:20:55.647000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.653703 kernel: audit: type=1101 audit(1765560055.644:867): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.653795 kernel: audit: type=1103 audit(1765560055.647:868): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.657002 kernel: audit: type=1006 audit(1765560055.647:869): pid=5276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:20:55.655742 systemd-logind[1564]: New session 22 of user core. 
Dec 12 17:20:55.647000 audit[5276]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffa15e30 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:55.661281 kernel: audit: type=1300 audit(1765560055.647:869): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffa15e30 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:55.661381 kernel: audit: type=1327 audit(1765560055.647:869): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:55.647000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:20:55.663814 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:20:55.666000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.671653 kernel: audit: type=1105 audit(1765560055.666:870): pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.671000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.675532 kernel: audit: type=1103 audit(1765560055.671:871): pid=5279 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.777817 sshd[5279]: Connection closed by 10.0.0.1 port 52556 Dec 12 17:20:55.778192 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:55.779000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.783553 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:52556.service: Deactivated successfully. Dec 12 17:20:55.779000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.787765 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:20:55.788894 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. 
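Given how much of this journal is taken up by the kubelet's "Error syncing pod, skipping" records, a short sketch for condensing them into a pod-to-image summary may help; it relies only on the pod="namespace/name" fields and the ghcr.io image references that appear verbatim above, and journal.txt is again a hypothetical plain-text export of this journal:

```python
# Minimal sketch: summarise which pods are stuck in image-pull back-off by
# extracting the pod name and ghcr.io image references from the kubelet's
# "Error syncing pod, skipping" lines above.
import re
from collections import defaultdict

POD_RE = re.compile(r'pod="([^"]+)"')
IMAGE_RE = re.compile(r'ghcr\.io/flatcar/calico/[\w.-]+:[\w.-]+')

def failing_images(path: str = "journal.txt") -> dict[str, set[str]]:
    failures: dict[str, set[str]] = defaultdict(set)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "Error syncing pod" not in line:
                continue
            pod = POD_RE.search(line)
            if pod:
                failures[pod.group(1)].update(IMAGE_RE.findall(line))
    return dict(failures)

# e.g. {'calico-system/whisker-7847574df-8wxnl':
#        {'ghcr.io/flatcar/calico/whisker:v3.30.4',
#         'ghcr.io/flatcar/calico/whisker-backend:v3.30.4'}, ...}
```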
Dec 12 17:20:55.790540 kernel: audit: type=1106 audit(1765560055.779:872): pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.790599 kernel: audit: type=1104 audit(1765560055.779:873): pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:20:55.791222 systemd-logind[1564]: Removed session 22. Dec 12 17:20:55.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.23:22-10.0.0.1:52556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:57.516264 containerd[1581]: time="2025-12-12T17:20:57.515353120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:20:57.712554 containerd[1581]: time="2025-12-12T17:20:57.712480541Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:57.714070 containerd[1581]: time="2025-12-12T17:20:57.714016351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:20:57.714147 containerd[1581]: time="2025-12-12T17:20:57.714098311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:57.714275 kubelet[2754]: E1212 17:20:57.714230 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:57.715569 kubelet[2754]: E1212 17:20:57.714284 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:57.715569 kubelet[2754]: E1212 17:20:57.714530 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84646c749c-5xrgw_calico-apiserver(f35c0998-e01c-46ee-bdc1-a591da003d92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:57.715703 containerd[1581]: time="2025-12-12T17:20:57.714646114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:20:57.715838 kubelet[2754]: E1212 17:20:57.715724 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84646c749c-5xrgw" podUID="f35c0998-e01c-46ee-bdc1-a591da003d92" Dec 12 17:20:57.889942 containerd[1581]: time="2025-12-12T17:20:57.889884525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:57.891034 containerd[1581]: time="2025-12-12T17:20:57.890984651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:20:57.891101 containerd[1581]: time="2025-12-12T17:20:57.891032612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:57.891459 
kubelet[2754]: E1212 17:20:57.891219 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:57.891459 kubelet[2754]: E1212 17:20:57.891276 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:20:57.891459 kubelet[2754]: E1212 17:20:57.891408 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv4jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84646c749c-wxdfq_calico-apiserver(0e2d55e4-7343-4bc1-8a02-a707014e8ced): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:57.892620 kubelet[2754]: E1212 17:20:57.892558 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-84646c749c-wxdfq" podUID="0e2d55e4-7343-4bc1-8a02-a707014e8ced" Dec 12 17:20:58.513540 containerd[1581]: time="2025-12-12T17:20:58.511976535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:20:58.716639 containerd[1581]: time="2025-12-12T17:20:58.716591850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:58.729644 containerd[1581]: time="2025-12-12T17:20:58.729586206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:20:58.729783 containerd[1581]: time="2025-12-12T17:20:58.729638767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:58.729820 kubelet[2754]: E1212 17:20:58.729781 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:20:58.730077 kubelet[2754]: E1212 17:20:58.729817 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:20:58.730077 kubelet[2754]: E1212 17:20:58.729958 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:58.732628 containerd[1581]: time="2025-12-12T17:20:58.732594824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:20:58.976858 containerd[1581]: time="2025-12-12T17:20:58.976680130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:20:58.978491 containerd[1581]: time="2025-12-12T17:20:58.978382300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:20:58.978627 containerd[1581]: time="2025-12-12T17:20:58.978460860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:20:58.978695 kubelet[2754]: E1212 17:20:58.978654 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:20:58.978739 kubelet[2754]: E1212 17:20:58.978708 2754 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:20:58.978935 kubelet[2754]: E1212 17:20:58.978859 2754 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p27qt_calico-system(9c455fe7-f7d2-456e-ac64-f3619ba04a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:20:58.980459 kubelet[2754]: E1212 17:20:58.980375 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p27qt" podUID="9c455fe7-f7d2-456e-ac64-f3619ba04a75" Dec 12 17:21:00.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.23:22-10.0.0.1:52568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.792418 systemd[1]: Started sshd@22-10.0.0.23:22-10.0.0.1:52568.service - OpenSSH per-connection server daemon (10.0.0.1:52568). Dec 12 17:21:00.796125 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:21:00.796184 kernel: audit: type=1130 audit(1765560060.792:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.23:22-10.0.0.1:52568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 12 17:21:00.851000 audit[5295]: USER_ACCT pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.854823 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 52568 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:00.855546 kernel: audit: type=1101 audit(1765560060.851:876): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.859177 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:00.858000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.865494 kernel: audit: type=1103 audit(1765560060.858:877): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.865664 kernel: audit: type=1006 audit(1765560060.858:878): pid=5295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:21:00.858000 audit[5295]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0e14c10 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:00.868924 kernel: audit: type=1300 audit(1765560060.858:878): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0e14c10 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:00.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:21:00.869757 systemd-logind[1564]: New session 23 of user core. Dec 12 17:21:00.870629 kernel: audit: type=1327 audit(1765560060.858:878): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:21:00.876852 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 17:21:00.878000 audit[5295]: USER_START pid=5295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.883528 kernel: audit: type=1105 audit(1765560060.878:879): pid=5295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.882000 audit[5298]: CRED_ACQ pid=5298 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.887526 kernel: audit: type=1103 audit(1765560060.882:880): pid=5298 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.968636 sshd[5298]: Connection closed by 10.0.0.1 port 52568 Dec 12 17:21:00.969236 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:00.970000 audit[5295]: USER_END pid=5295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.974637 systemd[1]: sshd@22-10.0.0.23:22-10.0.0.1:52568.service: Deactivated successfully. Dec 12 17:21:00.970000 audit[5295]: CRED_DISP pid=5295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.976650 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:21:00.977736 kernel: audit: type=1106 audit(1765560060.970:881): pid=5295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.977779 kernel: audit: type=1104 audit(1765560060.970:882): pid=5295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:00.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.23:22-10.0.0.1:52568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.978779 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:21:00.979580 systemd-logind[1564]: Removed session 23. 
Dec 12 17:21:02.511440 kubelet[2754]: E1212 17:21:02.511353 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7847574df-8wxnl" podUID="62dd324a-6db6-477a-9870-2e631369a8d1"