Dec 12 22:50:53.341418 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 22:50:53.341440 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 20:51:06 -00 2025
Dec 12 22:50:53.341450 kernel: KASLR enabled
Dec 12 22:50:53.341456 kernel: efi: EFI v2.7 by EDK II
Dec 12 22:50:53.341462 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 12 22:50:53.341468 kernel: random: crng init done
Dec 12 22:50:53.341476 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 22:50:53.341482 kernel: secureboot: Secure boot enabled
Dec 12 22:50:53.341489 kernel: ACPI: Early table checksum verification disabled
Dec 12 22:50:53.341495 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 12 22:50:53.341501 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 22:50:53.341507 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341514 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341535 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341548 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341557 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341564 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341571 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341578 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341584 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 22:50:53.341590 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 22:50:53.341597 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 22:50:53.341605 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 22:50:53.341612 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 12 22:50:53.341618 kernel: Zone ranges:
Dec 12 22:50:53.341624 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 22:50:53.341630 kernel: DMA32 empty
Dec 12 22:50:53.341637 kernel: Normal empty
Dec 12 22:50:53.341643 kernel: Device empty
Dec 12 22:50:53.341649 kernel: Movable zone start for each node
Dec 12 22:50:53.341655 kernel: Early memory node ranges
Dec 12 22:50:53.341662 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 12 22:50:53.341668 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 12 22:50:53.341675 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 12 22:50:53.341683 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 12 22:50:53.341698 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 12 22:50:53.341705 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 12 22:50:53.341711 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 12 22:50:53.341717 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 12 22:50:53.341724 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 22:50:53.341734 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 22:50:53.341741 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 22:50:53.341748 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 12 22:50:53.341755 kernel: psci: probing for conduit method from ACPI.
Dec 12 22:50:53.341762 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 22:50:53.341769 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 22:50:53.341775 kernel: psci: Trusted OS migration not required
Dec 12 22:50:53.341783 kernel: psci: SMC Calling Convention v1.1
Dec 12 22:50:53.341792 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 22:50:53.341798 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 22:50:53.341805 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 22:50:53.341813 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 22:50:53.341820 kernel: Detected PIPT I-cache on CPU0
Dec 12 22:50:53.341827 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 22:50:53.341833 kernel: CPU features: detected: Spectre-v4
Dec 12 22:50:53.341840 kernel: CPU features: detected: Spectre-BHB
Dec 12 22:50:53.341847 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 22:50:53.341854 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 22:50:53.341861 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 22:50:53.341869 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 22:50:53.341875 kernel: alternatives: applying boot alternatives
Dec 12 22:50:53.341884 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=319ee2439da9a842de2c25b49d70f7b5c5214dd0f4053f60c327b072c645cb1a
Dec 12 22:50:53.341891 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 22:50:53.341898 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 22:50:53.341904 kernel: Fallback order for Node 0: 0
Dec 12 22:50:53.341911 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 22:50:53.341918 kernel: Policy zone: DMA
Dec 12 22:50:53.341925 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 22:50:53.341931 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 22:50:53.341939 kernel: software IO TLB: area num 4.
Dec 12 22:50:53.341946 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 22:50:53.341953 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 12 22:50:53.341959 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 22:50:53.341966 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 22:50:53.341973 kernel: rcu: RCU event tracing is enabled.
Dec 12 22:50:53.341980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 22:50:53.341987 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 22:50:53.341994 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 22:50:53.342001 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 22:50:53.342008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 22:50:53.342015 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 22:50:53.342024 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 22:50:53.342031 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 22:50:53.342038 kernel: GICv3: 256 SPIs implemented
Dec 12 22:50:53.342045 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 22:50:53.342052 kernel: Root IRQ handler: gic_handle_irq
Dec 12 22:50:53.342059 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 22:50:53.342066 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 22:50:53.342073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 22:50:53.342080 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 22:50:53.342087 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 22:50:53.342094 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 22:50:53.342103 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 22:50:53.342110 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 22:50:53.342117 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 22:50:53.342124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 22:50:53.342131 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 22:50:53.342139 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 22:50:53.342146 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 22:50:53.342153 kernel: arm-pv: using stolen time PV
Dec 12 22:50:53.342160 kernel: Console: colour dummy device 80x25
Dec 12 22:50:53.342169 kernel: ACPI: Core revision 20240827
Dec 12 22:50:53.342177 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 22:50:53.342184 kernel: pid_max: default: 32768 minimum: 301
Dec 12 22:50:53.342192 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 22:50:53.342199 kernel: landlock: Up and running.
Dec 12 22:50:53.342206 kernel: SELinux: Initializing.
Dec 12 22:50:53.342213 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 22:50:53.342221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 22:50:53.342229 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 22:50:53.342236 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 22:50:53.342244 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 22:50:53.342251 kernel: Remapping and enabling EFI services.
Dec 12 22:50:53.342258 kernel: smp: Bringing up secondary CPUs ...
Dec 12 22:50:53.342266 kernel: Detected PIPT I-cache on CPU1
Dec 12 22:50:53.342273 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 22:50:53.342281 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 22:50:53.342289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 22:50:53.342301 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 22:50:53.342308 kernel: Detected PIPT I-cache on CPU2
Dec 12 22:50:53.342316 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 22:50:53.342324 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 22:50:53.342332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 22:50:53.342339 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 22:50:53.342347 kernel: Detected PIPT I-cache on CPU3
Dec 12 22:50:53.342356 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 22:50:53.342364 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 22:50:53.342371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 22:50:53.342379 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 22:50:53.342387 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 22:50:53.342396 kernel: SMP: Total of 4 processors activated.
Dec 12 22:50:53.342403 kernel: CPU: All CPU(s) started at EL1
Dec 12 22:50:53.342410 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 22:50:53.342418 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 22:50:53.342426 kernel: CPU features: detected: Common not Private translations
Dec 12 22:50:53.342433 kernel: CPU features: detected: CRC32 instructions
Dec 12 22:50:53.342441 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 22:50:53.342450 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 22:50:53.342458 kernel: CPU features: detected: LSE atomic instructions
Dec 12 22:50:53.342465 kernel: CPU features: detected: Privileged Access Never
Dec 12 22:50:53.342472 kernel: CPU features: detected: RAS Extension Support
Dec 12 22:50:53.342480 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 22:50:53.342487 kernel: alternatives: applying system-wide alternatives
Dec 12 22:50:53.342495 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 22:50:53.342502 kernel: Memory: 2448740K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 101212K reserved, 16384K cma-reserved)
Dec 12 22:50:53.342512 kernel: devtmpfs: initialized
Dec 12 22:50:53.342530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 22:50:53.342541 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 22:50:53.342549 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 22:50:53.342557 kernel: 0 pages in range for non-PLT usage
Dec 12 22:50:53.342564 kernel: 515168 pages in range for PLT usage
Dec 12 22:50:53.342571 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 22:50:53.342581 kernel: SMBIOS 3.0.0 present.
Dec 12 22:50:53.342588 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 22:50:53.342596 kernel: DMI: Memory slots populated: 1/1
Dec 12 22:50:53.342604 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 22:50:53.342611 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 22:50:53.342619 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 22:50:53.342626 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 22:50:53.342635 kernel: audit: initializing netlink subsys (disabled)
Dec 12 22:50:53.342643 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Dec 12 22:50:53.342651 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 22:50:53.342658 kernel: cpuidle: using governor menu
Dec 12 22:50:53.342666 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 22:50:53.342673 kernel: ASID allocator initialised with 32768 entries
Dec 12 22:50:53.342681 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 22:50:53.342694 kernel: Serial: AMBA PL011 UART driver
Dec 12 22:50:53.342702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 22:50:53.342710 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 22:50:53.342718 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 22:50:53.342725 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 22:50:53.342733 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 22:50:53.342740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 22:50:53.342748 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 22:50:53.342758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 22:50:53.342766 kernel: ACPI: Added _OSI(Module Device)
Dec 12 22:50:53.342773 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 22:50:53.342781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 22:50:53.342789 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 22:50:53.342797 kernel: ACPI: Interpreter enabled
Dec 12 22:50:53.342805 kernel: ACPI: Using GIC for interrupt routing
Dec 12 22:50:53.342813 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 22:50:53.342822 kernel: ACPI: CPU0 has been hot-added
Dec 12 22:50:53.342829 kernel: ACPI: CPU1 has been hot-added
Dec 12 22:50:53.342837 kernel: ACPI: CPU2 has been hot-added
Dec 12 22:50:53.342844 kernel: ACPI: CPU3 has been hot-added
Dec 12 22:50:53.342929 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 22:50:53.342939 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 22:50:53.342947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 22:50:53.343310 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 22:50:53.343508 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 22:50:53.343629 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 22:50:53.343722 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 22:50:53.343805 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 22:50:53.343821 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 22:50:53.343828 kernel: PCI host bridge to bus 0000:00
Dec 12 22:50:53.343915 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 22:50:53.344063 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 22:50:53.344148 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 22:50:53.344221 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 22:50:53.344329 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 22:50:53.344424 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 22:50:53.344569 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 22:50:53.344655 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 22:50:53.344747 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 22:50:53.344834 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 22:50:53.344926 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 22:50:53.345007 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 22:50:53.345082 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 22:50:53.345154 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 22:50:53.345229 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 22:50:53.345241 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 22:50:53.345249 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 22:50:53.345256 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 22:50:53.345264 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 22:50:53.345272 kernel: iommu: Default domain type: Translated
Dec 12 22:50:53.345280 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 22:50:53.345287 kernel: efivars: Registered efivars operations
Dec 12 22:50:53.345296 kernel: vgaarb: loaded
Dec 12 22:50:53.345303 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 22:50:53.345311 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 22:50:53.345318 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 22:50:53.345326 kernel: pnp: PnP ACPI init
Dec 12 22:50:53.345422 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 22:50:53.345433 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 22:50:53.345442 kernel: NET: Registered PF_INET protocol family
Dec 12 22:50:53.345457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 22:50:53.345465 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 22:50:53.345472 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 22:50:53.345480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 22:50:53.345488 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 22:50:53.345495 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 22:50:53.345505 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 22:50:53.345512 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 22:50:53.345534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 22:50:53.345542 kernel: PCI: CLS 0 bytes, default 64
Dec 12 22:50:53.345550 kernel: kvm [1]: HYP mode not available
Dec 12 22:50:53.345557 kernel: Initialise system trusted keyrings
Dec 12 22:50:53.345565 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 22:50:53.345574 kernel: Key type asymmetric registered
Dec 12 22:50:53.345582 kernel: Asymmetric key parser 'x509' registered
Dec 12 22:50:53.345590 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 22:50:53.345598 kernel: io scheduler mq-deadline registered
Dec 12 22:50:53.345605 kernel: io scheduler kyber registered
Dec 12 22:50:53.345613 kernel: io scheduler bfq registered
Dec 12 22:50:53.345621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 22:50:53.345629 kernel: ACPI: button: Power Button [PWRB]
Dec 12 22:50:53.345637 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 22:50:53.345731 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 22:50:53.345742 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 22:50:53.345750 kernel: thunder_xcv, ver 1.0
Dec 12 22:50:53.345757 kernel: thunder_bgx, ver 1.0
Dec 12 22:50:53.345765 kernel: nicpf, ver 1.0
Dec 12 22:50:53.345775 kernel: nicvf, ver 1.0
Dec 12 22:50:53.345883 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 22:50:53.345965 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T22:50:52 UTC (1765579852)
Dec 12 22:50:53.345975 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 22:50:53.345983 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 22:50:53.345990 kernel: watchdog: NMI not fully supported
Dec 12 22:50:53.345998 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 22:50:53.346007 kernel: NET: Registered PF_INET6 protocol family
Dec 12 22:50:53.346015 kernel: Segment Routing with IPv6
Dec 12 22:50:53.346023 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 22:50:53.346030 kernel: NET: Registered PF_PACKET protocol family
Dec 12 22:50:53.346038 kernel: Key type dns_resolver registered
Dec 12 22:50:53.346045 kernel: registered taskstats version 1
Dec 12 22:50:53.346052 kernel: Loading compiled-in X.509 certificates
Dec 12 22:50:53.346062 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 12083bc17742cc595b73e63bad6f3f8151e70077'
Dec 12 22:50:53.346069 kernel: Demotion targets for Node 0: null
Dec 12 22:50:53.346077 kernel: Key type .fscrypt registered
Dec 12 22:50:53.346084 kernel: Key type fscrypt-provisioning registered
Dec 12 22:50:53.346092 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 22:50:53.346100 kernel: ima: Allocated hash algorithm: sha1
Dec 12 22:50:53.346107 kernel: ima: No architecture policies found
Dec 12 22:50:53.346116 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 22:50:53.346124 kernel: clk: Disabling unused clocks
Dec 12 22:50:53.346131 kernel: PM: genpd: Disabling unused power domains
Dec 12 22:50:53.346139 kernel: Freeing unused kernel memory: 12480K
Dec 12 22:50:53.346147 kernel: Run /init as init process
Dec 12 22:50:53.346154 kernel: with arguments:
Dec 12 22:50:53.346162 kernel: /init
Dec 12 22:50:53.346170 kernel: with environment:
Dec 12 22:50:53.346178 kernel: HOME=/
Dec 12 22:50:53.346185 kernel: TERM=linux
Dec 12 22:50:53.346278 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 22:50:53.346358 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 12 22:50:53.346368 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 22:50:53.346378 kernel: GPT:16515071 != 27000831
Dec 12 22:50:53.346386 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 22:50:53.346393 kernel: GPT:16515071 != 27000831
Dec 12 22:50:53.346401 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 22:50:53.346411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 22:50:53.346419 kernel: SCSI subsystem initialized
Dec 12 22:50:53.346428 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 22:50:53.346437 kernel: device-mapper: uevent: version 1.0.3
Dec 12 22:50:53.346445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 22:50:53.346452 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 22:50:53.346460 kernel: raid6: neonx8 gen() 15773 MB/s
Dec 12 22:50:53.346468 kernel: raid6: neonx4 gen() 15688 MB/s
Dec 12 22:50:53.346475 kernel: raid6: neonx2 gen() 13190 MB/s
Dec 12 22:50:53.346483 kernel: raid6: neonx1 gen() 10435 MB/s
Dec 12 22:50:53.346491 kernel: raid6: int64x8 gen() 6834 MB/s
Dec 12 22:50:53.346499 kernel: raid6: int64x4 gen() 7349 MB/s
Dec 12 22:50:53.346506 kernel: raid6: int64x2 gen() 6098 MB/s
Dec 12 22:50:53.346514 kernel: raid6: int64x1 gen() 5034 MB/s
Dec 12 22:50:53.346532 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
Dec 12 22:50:53.346540 kernel: raid6: .... xor() 12042 MB/s, rmw enabled
Dec 12 22:50:53.346547 kernel: raid6: using neon recovery algorithm
Dec 12 22:50:53.346555 kernel: xor: measuring software checksum speed
Dec 12 22:50:53.346564 kernel: 8regs : 21573 MB/sec
Dec 12 22:50:53.346572 kernel: 32regs : 21641 MB/sec
Dec 12 22:50:53.346579 kernel: arm64_neon : 21404 MB/sec
Dec 12 22:50:53.346586 kernel: xor: using function: 32regs (21641 MB/sec)
Dec 12 22:50:53.346594 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 22:50:53.346602 kernel: BTRFS: device fsid 9461608f-358c-4289-9d88-a1064c902382 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (205)
Dec 12 22:50:53.346610 kernel: BTRFS info (device dm-0): first mount of filesystem 9461608f-358c-4289-9d88-a1064c902382
Dec 12 22:50:53.346619 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 22:50:53.346626 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 22:50:53.346634 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 22:50:53.346641 kernel: loop: module loaded
Dec 12 22:50:53.346649 kernel: loop0: detected capacity change from 0 to 91832
Dec 12 22:50:53.346657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 22:50:53.346666 systemd[1]: Successfully made /usr/ read-only.
Dec 12 22:50:53.346677 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 22:50:53.346686 systemd[1]: Detected virtualization kvm.
Dec 12 22:50:53.346701 systemd[1]: Detected architecture arm64.
Dec 12 22:50:53.346709 systemd[1]: Running in initrd.
Dec 12 22:50:53.346716 systemd[1]: No hostname configured, using default hostname.
Dec 12 22:50:53.346727 systemd[1]: Hostname set to .
Dec 12 22:50:53.346734 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 12 22:50:53.346742 systemd[1]: Queued start job for default target initrd.target.
Dec 12 22:50:53.346750 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 22:50:53.346758 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 22:50:53.346767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 22:50:53.346775 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 22:50:53.346785 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 22:50:53.346794 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 22:50:53.346802 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 22:50:53.346810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 22:50:53.346818 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 22:50:53.346828 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 22:50:53.346836 systemd[1]: Reached target paths.target - Path Units.
Dec 12 22:50:53.346844 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 22:50:53.346852 systemd[1]: Reached target swap.target - Swaps.
Dec 12 22:50:53.346860 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 22:50:53.346868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 22:50:53.346876 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 22:50:53.346889 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 22:50:53.346898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 22:50:53.346926 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 22:50:53.346934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 22:50:53.346943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 22:50:53.346958 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 22:50:53.346968 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 22:50:53.346977 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 22:50:53.346985 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 22:50:53.346994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 22:50:53.347002 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 22:50:53.347011 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 22:50:53.347020 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 22:50:53.347029 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 22:50:53.347038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 22:50:53.347047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 22:50:53.347057 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 22:50:53.347065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 22:50:53.347074 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 22:50:53.347083 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 22:50:53.347091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 22:50:53.347122 systemd-journald[348]: Collecting audit messages is enabled.
Dec 12 22:50:53.347143 kernel: Bridge firewalling registered
Dec 12 22:50:53.347152 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 22:50:53.347161 kernel: audit: type=1130 audit(1765579853.343:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.347171 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 22:50:53.347180 systemd-journald[348]: Journal started
Dec 12 22:50:53.347199 systemd-journald[348]: Runtime Journal (/run/log/journal/17705fabc2b04e4ca812dec1ca5f9984) is 6M, max 48.5M, 42.4M free.
Dec 12 22:50:53.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.341230 systemd-modules-load[349]: Inserted module 'br_netfilter'
Dec 12 22:50:53.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.353548 kernel: audit: type=1130 audit(1765579853.349:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.353585 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 22:50:53.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.357489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 22:50:53.358600 kernel: audit: type=1130 audit(1765579853.354:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.362556 kernel: audit: type=1130 audit(1765579853.359:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.362944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 22:50:53.366086 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 22:50:53.376337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 22:50:53.380517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 22:50:53.385783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 22:50:53.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.390679 systemd-tmpfiles[370]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 22:50:53.393553 kernel: audit: type=1130 audit(1765579853.388:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.393582 kernel: audit: type=1334 audit(1765579853.390:7): prog-id=6 op=LOAD
Dec 12 22:50:53.390000 audit: BPF prog-id=6 op=LOAD
Dec 12 22:50:53.392663 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 22:50:53.395685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 22:50:53.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.400227 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 22:50:53.404855 kernel: audit: type=1130 audit(1765579853.396:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.404887 kernel: audit: type=1130 audit(1765579853.400:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.407082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 22:50:53.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.410157 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 22:50:53.413750 kernel: audit: type=1130 audit(1765579853.407:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.445104 systemd-resolved[384]: Positive Trust Anchors:
Dec 12 22:50:53.445120 systemd-resolved[384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 22:50:53.447569 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=319ee2439da9a842de2c25b49d70f7b5c5214dd0f4053f60c327b072c645cb1a
Dec 12 22:50:53.445124 systemd-resolved[384]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 12 22:50:53.445155 systemd-resolved[384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 22:50:53.470837 systemd-resolved[384]: Defaulting to hostname 'linux'.
Dec 12 22:50:53.471763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 22:50:53.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.473384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 22:50:53.521560 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 22:50:53.530579 kernel: iscsi: registered transport (tcp)
Dec 12 22:50:53.543656 kernel: iscsi: registered transport (qla4xxx)
Dec 12 22:50:53.543713 kernel: QLogic iSCSI HBA Driver
Dec 12 22:50:53.565463 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 22:50:53.582309 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 22:50:53.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.585673 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 22:50:53.630349 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 22:50:53.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.633509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 22:50:53.635078 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 22:50:53.665695 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 22:50:53.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.667000 audit: BPF prog-id=7 op=LOAD
Dec 12 22:50:53.667000 audit: BPF prog-id=8 op=LOAD
Dec 12 22:50:53.668207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 22:50:53.694035 systemd-udevd[629]: Using default interface naming scheme 'v257'.
Dec 12 22:50:53.701817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 22:50:53.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.705089 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 22:50:53.734956 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation
Dec 12 22:50:53.736814 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 22:50:53.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.739000 audit: BPF prog-id=9 op=LOAD
Dec 12 22:50:53.740149 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 22:50:53.759804 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 22:50:53.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.762719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 22:50:53.787485 systemd-networkd[741]: lo: Link UP
Dec 12 22:50:53.787492 systemd-networkd[741]: lo: Gained carrier
Dec 12 22:50:53.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.788172 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 22:50:53.789407 systemd[1]: Reached target network.target - Network.
Dec 12 22:50:53.824971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 22:50:53.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.828464 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 22:50:53.867208 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 22:50:53.882182 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 22:50:53.902382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 22:50:53.911627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 22:50:53.916903 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 22:50:53.928376 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 22:50:53.928510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 22:50:53.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.931980 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 22:50:53.934844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 22:50:53.939659 disk-uuid[802]: Primary Header is updated.
Dec 12 22:50:53.939659 disk-uuid[802]: Secondary Entries is updated.
Dec 12 22:50:53.939659 disk-uuid[802]: Secondary Header is updated.
Dec 12 22:50:53.942648 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 22:50:53.942659 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 22:50:53.943126 systemd-networkd[741]: eth0: Link UP
Dec 12 22:50:53.944235 systemd-networkd[741]: eth0: Gained carrier
Dec 12 22:50:53.944247 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 22:50:53.959633 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 22:50:53.972931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 22:50:53.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.986185 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 22:50:53.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:53.995778 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 22:50:53.996957 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 22:50:53.999365 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 22:50:54.002252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 22:50:54.023221 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 22:50:54.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:54.976367 disk-uuid[804]: Warning: The kernel is still using the old partition table.
Dec 12 22:50:54.976367 disk-uuid[804]: The new table will be used at the next reboot or after you
Dec 12 22:50:54.976367 disk-uuid[804]: run partprobe(8) or kpartx(8)
Dec 12 22:50:54.976367 disk-uuid[804]: The operation has completed successfully.
Dec 12 22:50:54.985810 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 22:50:54.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:54.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:54.985922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 22:50:54.988497 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 22:50:55.022382 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (831)
Dec 12 22:50:55.022429 kernel: BTRFS info (device vda6): first mount of filesystem 28768939-a818-41bf-834f-fc0d273267f7
Dec 12 22:50:55.023909 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 22:50:55.026611 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 22:50:55.026633 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 22:50:55.032545 kernel: BTRFS info (device vda6): last unmount of filesystem 28768939-a818-41bf-834f-fc0d273267f7
Dec 12 22:50:55.034610 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 22:50:55.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.036942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 22:50:55.125045 ignition[850]: Ignition 2.24.0
Dec 12 22:50:55.125930 ignition[850]: Stage: fetch-offline
Dec 12 22:50:55.125993 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Dec 12 22:50:55.126004 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 22:50:55.126162 ignition[850]: parsed url from cmdline: ""
Dec 12 22:50:55.126165 ignition[850]: no config URL provided
Dec 12 22:50:55.126170 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 22:50:55.126178 ignition[850]: no config at "/usr/lib/ignition/user.ign"
Dec 12 22:50:55.126216 ignition[850]: op(1): [started] loading QEMU firmware config module
Dec 12 22:50:55.126220 ignition[850]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 22:50:55.132507 ignition[850]: op(1): [finished] loading QEMU firmware config module
Dec 12 22:50:55.197280 ignition[850]: parsing config with SHA512: f8842309d4a617c21c55ab9d61219929a5a5f54bd404c5f6c08da767d40b6eae564fc2d430aa0201f46d467f1e6a66cedcc39d464fc34f37c0e26a13e763cf55
Dec 12 22:50:55.204622 unknown[850]: fetched base config from "system"
Dec 12 22:50:55.204651 unknown[850]: fetched user config from "qemu"
Dec 12 22:50:55.205063 ignition[850]: fetch-offline: fetch-offline passed
Dec 12 22:50:55.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.207291 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 22:50:55.205225 ignition[850]: Ignition finished successfully
Dec 12 22:50:55.208808 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 22:50:55.209635 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 22:50:55.237144 ignition[863]: Ignition 2.24.0
Dec 12 22:50:55.237163 ignition[863]: Stage: kargs
Dec 12 22:50:55.237329 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Dec 12 22:50:55.237338 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 22:50:55.241124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 22:50:55.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.238259 ignition[863]: kargs: kargs passed
Dec 12 22:50:55.243565 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 22:50:55.238306 ignition[863]: Ignition finished successfully
Dec 12 22:50:55.270269 ignition[870]: Ignition 2.24.0
Dec 12 22:50:55.270288 ignition[870]: Stage: disks
Dec 12 22:50:55.270437 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Dec 12 22:50:55.270445 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 22:50:55.271236 ignition[870]: disks: disks passed
Dec 12 22:50:55.271281 ignition[870]: Ignition finished successfully
Dec 12 22:50:55.275066 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 22:50:55.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.276356 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 22:50:55.278105 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 22:50:55.279999 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 22:50:55.281920 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 22:50:55.283483 systemd[1]: Reached target basic.target - Basic System.
Dec 12 22:50:55.286161 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 22:50:55.329530 systemd-fsck[878]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 12 22:50:55.333953 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 22:50:55.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.336995 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 22:50:55.414557 kernel: EXT4-fs (vda9): mounted filesystem fd3a0b75-0f9c-475b-b991-8ab489e31346 r/w with ordered data mode. Quota mode: none.
Dec 12 22:50:55.414846 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 22:50:55.416108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 22:50:55.418645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 22:50:55.420596 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 22:50:55.421791 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 22:50:55.421834 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 22:50:55.421862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 22:50:55.439464 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 22:50:55.442313 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 22:50:55.447232 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Dec 12 22:50:55.447256 kernel: BTRFS info (device vda6): first mount of filesystem 28768939-a818-41bf-834f-fc0d273267f7
Dec 12 22:50:55.447266 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 22:50:55.449902 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 22:50:55.449936 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 22:50:55.450981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 22:50:55.565356 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 22:50:55.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.569628 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 22:50:55.571446 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 22:50:55.585085 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 22:50:55.586573 kernel: BTRFS info (device vda6): last unmount of filesystem 28768939-a818-41bf-834f-fc0d273267f7
Dec 12 22:50:55.603691 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 22:50:55.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.615366 ignition[987]: INFO : Ignition 2.24.0
Dec 12 22:50:55.615366 ignition[987]: INFO : Stage: mount
Dec 12 22:50:55.617933 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 22:50:55.617933 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 22:50:55.617933 ignition[987]: INFO : mount: mount passed
Dec 12 22:50:55.617933 ignition[987]: INFO : Ignition finished successfully
Dec 12 22:50:55.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 22:50:55.618738 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 22:50:55.620591 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 22:50:55.768751 systemd-networkd[741]: eth0: Gained IPv6LL
Dec 12 22:50:56.416514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 22:50:56.452171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997)
Dec 12 22:50:56.452223 kernel: BTRFS info (device vda6): first mount of filesystem 28768939-a818-41bf-834f-fc0d273267f7
Dec 12 22:50:56.452246 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 22:50:56.456127 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 22:50:56.456190 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 22:50:56.457571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 22:50:56.490547 ignition[1014]: INFO : Ignition 2.24.0 Dec 12 22:50:56.490547 ignition[1014]: INFO : Stage: files Dec 12 22:50:56.492483 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 22:50:56.492483 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 22:50:56.492483 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Dec 12 22:50:56.497955 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 22:50:56.497955 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 22:50:56.497955 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 22:50:56.497955 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 22:50:56.497955 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 22:50:56.497839 unknown[1014]: wrote ssh authorized keys file for user: core Dec 12 22:50:56.507737 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 22:50:56.507737 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 12 22:50:56.554937 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 22:50:56.690260 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 22:50:56.690260 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 22:50:56.694562 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 22:50:56.711714 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 22:50:56.711714 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 22:50:56.711714 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 22:50:57.088442 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 22:50:57.322969 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 22:50:57.322969 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 22:50:57.327068 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 22:50:57.342537 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 22:50:57.346569 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 22:50:57.348074 ignition[1014]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 22:50:57.348074 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 22:50:57.348074 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 22:50:57.348074 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 22:50:57.348074 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 22:50:57.348074 ignition[1014]: INFO : files: files passed Dec 12 22:50:57.348074 ignition[1014]: INFO : Ignition finished successfully Dec 12 22:50:57.362435 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 12 22:50:57.362465 kernel: audit: type=1130 audit(1765579857.350:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:50:57.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.348997 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 22:50:57.352596 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 22:50:57.361934 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 22:50:57.367110 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 22:50:57.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.367211 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 22:50:57.376112 kernel: audit: type=1130 audit(1765579857.368:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.376139 kernel: audit: type=1131 audit(1765579857.368:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.378045 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 22:50:57.380721 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 22:50:57.380721 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 22:50:57.386442 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 22:50:57.390646 kernel: audit: type=1130 audit(1765579857.386:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.384147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 22:50:57.387148 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 22:50:57.392480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 22:50:57.447748 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 22:50:57.447886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 22:50:57.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.450318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
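The Ignition "files" stage above fetches a Helm tarball and a Kubernetes sysext image, writes several manifests under /home/core plus /etc/flatcar/update.conf, links /etc/extensions/kubernetes.raw, enables prepare-helm.service and disables coreos-metadata.service. The provisioning config itself never appears in the log; the sketch below only assembles an Ignition-v3-style JSON document naming the same paths, links and units. The spec version, file modes, sources and unit contents are assumptions for illustration, not anything recoverable from the log:

```python
import json

# Hedged reconstruction of a config that *could* produce the operations
# logged above; version, modes and unit bodies are assumed, not observed.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
            {"path": "/home/core/install.sh", "mode": 0o755},  # mode assumed
            {"path": "/home/core/nginx.yaml"},
            {"path": "/home/core/nfs-pod.yaml"},
            {"path": "/home/core/nfs-pvc.yaml"},
            {"path": "/etc/flatcar/update.conf"},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "# unit body not recoverable from the log"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```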
Dec 12 22:50:57.458322 kernel: audit: type=1130 audit(1765579857.449:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.458347 kernel: audit: type=1131 audit(1765579857.449:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.457302 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 22:50:57.459495 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 22:50:57.460501 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 22:50:57.486686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 22:50:57.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.489611 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 22:50:57.493900 kernel: audit: type=1130 audit(1765579857.487:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.512455 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 22:50:57.512696 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 22:50:57.514888 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 22:50:57.516941 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 22:50:57.518661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 22:50:57.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.518814 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 22:50:57.526058 kernel: audit: type=1131 audit(1765579857.519:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.523875 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 22:50:57.524986 systemd[1]: Stopped target basic.target - Basic System. Dec 12 22:50:57.526948 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 22:50:57.528795 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 22:50:57.530624 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 22:50:57.532789 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 22:50:57.534789 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 22:50:57.536533 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 12 22:50:57.538760 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 22:50:57.540563 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 22:50:57.542971 systemd[1]: Stopped target swap.target - Swaps. Dec 12 22:50:57.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.544714 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 22:50:57.550808 kernel: audit: type=1131 audit(1765579857.546:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.544865 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 22:50:57.549969 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 22:50:57.551908 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 22:50:57.554030 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 22:50:57.557647 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 22:50:57.558976 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 22:50:57.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.559112 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 22:50:57.565458 kernel: audit: type=1131 audit(1765579857.561:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.564476 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 22:50:57.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.564635 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 22:50:57.566686 systemd[1]: Stopped target paths.target - Path Units. Dec 12 22:50:57.568326 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 22:50:57.568465 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 22:50:57.570466 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 22:50:57.572045 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 22:50:57.573830 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 22:50:57.573918 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 22:50:57.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.575895 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 22:50:57.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:50:57.575975 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 22:50:57.577557 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 22:50:57.577632 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 22:50:57.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.579423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 22:50:57.579555 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 22:50:57.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.581214 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 22:50:57.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.581316 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 22:50:57.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.583924 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 22:50:57.585548 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 22:50:57.585679 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 22:50:57.588642 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 22:50:57.590349 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 22:50:57.590474 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 22:50:57.593019 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 22:50:57.593136 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 22:50:57.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.595286 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 22:50:57.595397 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 22:50:57.605607 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 22:50:57.605711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 22:50:57.612473 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
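Most of the records in this stretch are kernel audit events (SERVICE_START/SERVICE_STOP) emitted once per unit as the initrd is torn down. The helper below is a hypothetical sketch, not an auditd utility: it pulls the action and unit name out of console lines of the shape shown above.

```python
import re

# Extract (action, unit) pairs from audit records such as:
#   audit[1]: SERVICE_STOP pid=1 uid=0 ... msg='unit=ignition-files comm="systemd" ...'
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (?P<action>SERVICE_START|SERVICE_STOP)\b.*?unit=(?P<unit>[\w@\\.-]+)"
)

def service_events(lines):
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            yield m.group("action"), m.group("unit")

if __name__ == "__main__":
    sample = [
        "Dec 12 22:50:57.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 "
        "ses=4294967295 subj=kernel msg='unit=ignition-files comm=\"systemd\" ...'",
    ]
    for action, unit in service_events(sample):
        print(action, unit)   # -> SERVICE_STOP ignition-files
```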
Dec 12 22:50:57.615450 ignition[1071]: INFO : Ignition 2.24.0 Dec 12 22:50:57.615450 ignition[1071]: INFO : Stage: umount Dec 12 22:50:57.617408 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 22:50:57.617408 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 22:50:57.619748 ignition[1071]: INFO : umount: umount passed Dec 12 22:50:57.619748 ignition[1071]: INFO : Ignition finished successfully Dec 12 22:50:57.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.619855 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 22:50:57.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.619991 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 22:50:57.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.621876 systemd[1]: Stopped target network.target - Network. Dec 12 22:50:57.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.622751 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 22:50:57.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.622814 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 22:50:57.624580 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 22:50:57.624630 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 22:50:57.626615 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 22:50:57.626669 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 22:50:57.628477 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 22:50:57.628542 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 22:50:57.630496 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 22:50:57.633635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 22:50:57.645880 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 22:50:57.646004 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 22:50:57.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.650000 audit: BPF prog-id=9 op=UNLOAD Dec 12 22:50:57.651054 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 22:50:57.651164 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 22:50:57.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:50:57.654881 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 22:50:57.655010 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 22:50:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.659480 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 22:50:57.659000 audit: BPF prog-id=6 op=UNLOAD Dec 12 22:50:57.661650 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 22:50:57.661718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 22:50:57.663670 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 22:50:57.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.663740 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 22:50:57.666434 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 22:50:57.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.667401 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 22:50:57.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.667467 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 22:50:57.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.669865 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 22:50:57.669911 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 22:50:57.671762 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 22:50:57.671805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 22:50:57.673663 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 22:50:57.692956 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 22:50:57.700731 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 22:50:57.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.702370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 22:50:57.702410 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 22:50:57.704107 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 22:50:57.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.704144 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 12 22:50:57.705873 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 22:50:57.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.705931 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 22:50:57.708436 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 22:50:57.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.708490 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 22:50:57.711284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 22:50:57.711341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 22:50:57.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.715057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 22:50:57.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.716085 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 22:50:57.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.716146 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 22:50:57.718071 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 22:50:57.718124 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 22:50:57.720248 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 22:50:57.720295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 22:50:57.722850 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 22:50:57.738111 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 22:50:57.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.743800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 22:50:57.743925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 22:50:57.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:57.746056 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Dec 12 22:50:57.759108 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 22:50:57.767981 systemd[1]: Switching root. Dec 12 22:50:57.796307 systemd-journald[348]: Journal stopped Dec 12 22:50:58.628104 systemd-journald[348]: Received SIGTERM from PID 1 (systemd). Dec 12 22:50:58.628154 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 22:50:58.628171 kernel: SELinux: policy capability open_perms=1 Dec 12 22:50:58.628181 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 22:50:58.628191 kernel: SELinux: policy capability always_check_network=0 Dec 12 22:50:58.628201 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 22:50:58.628211 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 22:50:58.628226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 22:50:58.628241 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 22:50:58.628251 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 22:50:58.628264 systemd[1]: Successfully loaded SELinux policy in 65.707ms. Dec 12 22:50:58.628281 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.920ms. Dec 12 22:50:58.628296 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 22:50:58.628307 systemd[1]: Detected virtualization kvm. Dec 12 22:50:58.628318 systemd[1]: Detected architecture arm64. Dec 12 22:50:58.628329 systemd[1]: Detected first boot. Dec 12 22:50:58.628340 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 22:50:58.628351 kernel: NET: Registered PF_VSOCK protocol family Dec 12 22:50:58.628361 zram_generator::config[1117]: No configuration found. Dec 12 22:50:58.628374 systemd[1]: Populated /etc with preset unit settings. Dec 12 22:50:58.628385 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 22:50:58.628396 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 22:50:58.628408 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 22:50:58.628424 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 22:50:58.628435 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 22:50:58.628446 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 22:50:58.628457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 22:50:58.628468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 22:50:58.628480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 22:50:58.628492 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 22:50:58.628503 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 22:50:58.628513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 22:50:58.628566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 22:50:58.628581 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
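After the switch root, systemd prints its version and compile-time feature string. The feature list in the sketch below is copied from that banner; the splitting helper itself is only an illustration of how to read the +/- prefixes.

```python
# Split a systemd feature banner (copied from the log above) into
# enabled (+X) and disabled (-X) feature sets.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC "
            "+KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
            "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def split_features(banner: str):
    enabled = {tok[1:] for tok in banner.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in banner.split() if tok.startswith("-")}
    return enabled, disabled

enabled, disabled = split_features(FEATURES)
print(f"{len(enabled)} enabled, {len(disabled)} disabled")
print("SELinux built in:", "SELINUX" in enabled)      # True for this build
print("AppArmor built in:", "APPARMOR" in enabled)    # False for this build
```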
Dec 12 22:50:58.628592 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 22:50:58.628603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 22:50:58.628617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 22:50:58.628628 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 22:50:58.628640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 22:50:58.628650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 22:50:58.628661 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 22:50:58.628680 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 22:50:58.628696 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 22:50:58.628707 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 22:50:58.628718 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 22:50:58.628729 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 22:50:58.628740 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 22:50:58.628751 systemd[1]: Reached target slices.target - Slice Units. Dec 12 22:50:58.628761 systemd[1]: Reached target swap.target - Swaps. Dec 12 22:50:58.628773 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 22:50:58.628783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 22:50:58.628794 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 22:50:58.628805 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 22:50:58.628816 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 22:50:58.628827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 22:50:58.628837 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 22:50:58.628850 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 22:50:58.628861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 22:50:58.628872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 22:50:58.628883 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 22:50:58.628894 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 22:50:58.628905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 22:50:58.628915 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 22:50:58.628928 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 22:50:58.628939 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 22:50:58.628950 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 22:50:58.628961 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 22:50:58.628972 systemd[1]: Reached target machines.target - Containers. 
Dec 12 22:50:58.628983 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 22:50:58.628994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 22:50:58.629006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 22:50:58.629017 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 22:50:58.629030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 22:50:58.629040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 22:50:58.629051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 22:50:58.629062 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 22:50:58.629073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 22:50:58.629086 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 22:50:58.629096 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 22:50:58.629107 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 22:50:58.629118 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 22:50:58.629128 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 22:50:58.629140 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 22:50:58.629153 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 22:50:58.629163 kernel: fuse: init (API version 7.41) Dec 12 22:50:58.629174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 22:50:58.629186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 22:50:58.629197 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 22:50:58.629209 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 22:50:58.629219 kernel: ACPI: bus type drm_connector registered Dec 12 22:50:58.629229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 22:50:58.629240 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 22:50:58.629251 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 22:50:58.629262 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 22:50:58.629273 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 22:50:58.629286 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 22:50:58.629297 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 22:50:58.629327 systemd-journald[1193]: Collecting audit messages is enabled. Dec 12 22:50:58.629350 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 22:50:58.629361 systemd-journald[1193]: Journal started Dec 12 22:50:58.629385 systemd-journald[1193]: Runtime Journal (/run/log/journal/17705fabc2b04e4ca812dec1ca5f9984) is 6M, max 48.5M, 42.4M free. 
Dec 12 22:50:58.465000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 22:50:58.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.581000 audit: BPF prog-id=14 op=UNLOAD Dec 12 22:50:58.581000 audit: BPF prog-id=13 op=UNLOAD Dec 12 22:50:58.582000 audit: BPF prog-id=15 op=LOAD Dec 12 22:50:58.582000 audit: BPF prog-id=16 op=LOAD Dec 12 22:50:58.582000 audit: BPF prog-id=17 op=LOAD Dec 12 22:50:58.627000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 22:50:58.627000 audit[1193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffdf1e9430 a2=4000 a3=0 items=0 ppid=1 pid=1193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:50:58.627000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 22:50:58.375575 systemd[1]: Queued start job for default target multi-user.target. Dec 12 22:50:58.384395 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 22:50:58.384815 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 22:50:58.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.632540 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 22:50:58.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.633483 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 22:50:58.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.634931 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 22:50:58.636605 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 22:50:58.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.637949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
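The SYSCALL audit record above carries arch=c00000b7 and syscall=211 for systemd-journald. As a sketch (the constants come from the kernel's audit and asm-generic UAPI headers, not from this log), that arch value decodes to EM_AARCH64 with the 64-bit and little-endian audit flags set, and syscall 211 in the asm-generic table used on arm64 is sendmsg, consistent with journald sending on a socket:

```python
# Decode the audit arch value logged above (constants from kernel UAPI headers).
EM_AARCH64 = 183            # ELF machine number for aarch64
AUDIT_ARCH_64BIT = 0x80000000
AUDIT_ARCH_LE = 0x40000000

arch = 0xC00000B7
assert arch == EM_AARCH64 | AUDIT_ARCH_64BIT | AUDIT_ARCH_LE  # AUDIT_ARCH_AARCH64

print("64-bit:       ", bool(arch & AUDIT_ARCH_64BIT))
print("little-endian:", bool(arch & AUDIT_ARCH_LE))
print("machine:      ", arch & 0xFFFF)   # 183 == EM_AARCH64
```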
Dec 12 22:50:58.638136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 22:50:58.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.639454 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 22:50:58.639679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 22:50:58.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.640963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 22:50:58.641140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 22:50:58.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.642725 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 22:50:58.642883 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 22:50:58.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.644402 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 22:50:58.644591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 22:50:58.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.645883 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 22:50:58.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:50:58.647314 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 22:50:58.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.649466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 22:50:58.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.651133 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 22:50:58.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.663356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 22:50:58.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.665168 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 22:50:58.666592 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 22:50:58.667724 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 22:50:58.667755 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 22:50:58.669472 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 22:50:58.671187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 22:50:58.671297 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 22:50:58.672617 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 22:50:58.674423 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 22:50:58.675567 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 22:50:58.676426 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 22:50:58.677511 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 22:50:58.686470 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 22:50:58.691680 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 22:50:58.693531 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 22:50:58.694971 systemd-journald[1193]: Time spent on flushing to /var/log/journal/17705fabc2b04e4ca812dec1ca5f9984 is 29.074ms for 999 entries. 
Dec 12 22:50:58.694971 systemd-journald[1193]: System Journal (/var/log/journal/17705fabc2b04e4ca812dec1ca5f9984) is 8M, max 163.5M, 155.5M free. Dec 12 22:50:58.735940 systemd-journald[1193]: Received client request to flush runtime journal. Dec 12 22:50:58.735992 kernel: loop1: detected capacity change from 0 to 353272 Dec 12 22:50:58.736008 kernel: loop1: p1 p2 p3 Dec 12 22:50:58.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.723000 audit: BPF prog-id=18 op=LOAD Dec 12 22:50:58.723000 audit: BPF prog-id=19 op=LOAD Dec 12 22:50:58.723000 audit: BPF prog-id=20 op=LOAD Dec 12 22:50:58.726000 audit: BPF prog-id=21 op=LOAD Dec 12 22:50:58.733000 audit: BPF prog-id=22 op=LOAD Dec 12 22:50:58.733000 audit: BPF prog-id=23 op=LOAD Dec 12 22:50:58.733000 audit: BPF prog-id=24 op=LOAD Dec 12 22:50:58.695594 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 22:50:58.736000 audit: BPF prog-id=25 op=LOAD Dec 12 22:50:58.697787 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 22:50:58.736000 audit: BPF prog-id=26 op=LOAD Dec 12 22:50:58.736000 audit: BPF prog-id=27 op=LOAD Dec 12 22:50:58.700513 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 22:50:58.716083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 22:50:58.721268 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 22:50:58.724778 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 22:50:58.727742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 22:50:58.729993 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 22:50:58.734707 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 22:50:58.738826 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 22:50:58.741591 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 22:50:58.742552 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39. Dec 12 22:50:58.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.748882 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
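Two journald figures above invite a quick sanity check: flushing took 29.074 ms for 999 entries, and the persistent journal reports 8M used against a 163.5M cap with 155.5M free. A tiny sketch with those values copied from the log:

```python
# Back-of-the-envelope arithmetic on the journald status lines above.
flush_ms, entries = 29.074, 999
used_mib, max_mib, free_mib = 8.0, 163.5, 155.5

print(f"~{flush_ms * 1000 / entries:.1f} us per flushed entry")              # ~29.1 us
print(f"cap - used = {max_mib - used_mib:.1f} MiB (log says {free_mib} MiB free)")
```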
Dec 12 22:50:58.762558 kernel: loop2: detected capacity change from 0 to 161080 Dec 12 22:50:58.764553 kernel: loop2: p1 p2 p3 Dec 12 22:50:58.768635 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Dec 12 22:50:58.768656 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Dec 12 22:50:58.772985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 22:50:58.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.781542 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Dec 12 22:50:58.785811 systemd-nsresourced[1247]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 22:50:58.787408 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 22:50:58.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.792080 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 22:50:58.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.796542 kernel: loop3: detected capacity change from 0 to 207008 Dec 12 22:50:58.825557 kernel: loop4: detected capacity change from 0 to 353272 Dec 12 22:50:58.828546 kernel: loop4: p1 p2 p3 Dec 12 22:50:58.838620 systemd-oomd[1242]: No swap; memory pressure usage will be degraded Dec 12 22:50:58.839362 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 22:50:58.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.842604 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 22:50:58.842652 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 12 22:50:58.842666 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 12 22:50:58.844214 (sd-merge)[1274]: device-mapper: reload ioctl on ed0b824f9295294cb3542232bcbee15ee3d793bc3d307e21c5cfeb13c84c1736-verity (253:1) failed: Invalid argument Dec 12 22:50:58.844546 kernel: device-mapper: ioctl: error adding target to table Dec 12 22:50:58.845072 systemd-resolved[1243]: Positive Trust Anchors: Dec 12 22:50:58.845312 systemd-resolved[1243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 22:50:58.845364 systemd-resolved[1243]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 22:50:58.845430 systemd-resolved[1243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 22:50:58.850456 systemd-resolved[1243]: Defaulting to hostname 'linux'. Dec 12 22:50:58.851999 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 22:50:58.852556 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 22:50:58.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:58.853254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 22:50:59.108643 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 22:50:59.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.109000 audit: BPF prog-id=8 op=UNLOAD Dec 12 22:50:59.109000 audit: BPF prog-id=7 op=UNLOAD Dec 12 22:50:59.110000 audit: BPF prog-id=28 op=LOAD Dec 12 22:50:59.110000 audit: BPF prog-id=29 op=LOAD Dec 12 22:50:59.111303 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 22:50:59.149197 systemd-udevd[1279]: Using default interface naming scheme 'v257'. Dec 12 22:50:59.164400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 22:50:59.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.168681 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 22:50:59.169000 audit: BPF prog-id=30 op=LOAD Dec 12 22:50:59.171691 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 22:50:59.175110 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 22:50:59.190006 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 22:50:59.190222 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 22:50:59.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.192975 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 12 22:50:59.193195 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 22:50:59.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.226942 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 22:50:59.228817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 22:50:59.232666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 22:50:59.252317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 22:50:59.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.256720 systemd-networkd[1292]: lo: Link UP Dec 12 22:50:59.256729 systemd-networkd[1292]: lo: Gained carrier Dec 12 22:50:59.257534 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 22:50:59.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.258590 systemd[1]: Reached target network.target - Network. Dec 12 22:50:59.260103 systemd-networkd[1292]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 22:50:59.260112 systemd-networkd[1292]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 22:50:59.260683 systemd-networkd[1292]: eth0: Link UP Dec 12 22:50:59.260813 systemd-networkd[1292]: eth0: Gained carrier Dec 12 22:50:59.260827 systemd-networkd[1292]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 22:50:59.261858 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 22:50:59.264128 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 22:50:59.273595 systemd-networkd[1292]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 22:50:59.283322 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 22:50:59.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.338649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 22:50:59.349546 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. 
Dec 12 22:50:59.351546 kernel: loop5: detected capacity change from 0 to 161080 Dec 12 22:50:59.352540 kernel: loop5: p1 p2 p3 Dec 12 22:50:59.363623 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 22:50:59.363690 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 12 22:50:59.365641 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 12 22:50:59.366718 kernel: device-mapper: ioctl: error adding target to table Dec 12 22:50:59.367691 (sd-merge)[1274]: device-mapper: reload ioctl on 58be82011d4f3881d3ec821e4b3977bc281fffc2f3ce2062c2878a2d0c7e695d-verity (253:2) failed: Invalid argument Dec 12 22:50:59.375544 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 22:50:59.384228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 22:50:59.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.387334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 22:50:59.389124 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 22:50:59.392287 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 22:50:59.396604 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 12 22:50:59.398554 kernel: loop6: detected capacity change from 0 to 207008 Dec 12 22:50:59.405128 (sd-merge)[1274]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 12 22:50:59.407678 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 22:50:59.408012 (sd-merge)[1274]: Merged extensions into '/usr'. Dec 12 22:50:59.409518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 22:50:59.413696 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 22:50:59.413710 systemd[1]: Reloading... Dec 12 22:50:59.456609 zram_generator::config[1383]: No configuration found. Dec 12 22:50:59.624154 systemd[1]: Reloading finished in 210 ms. Dec 12 22:50:59.658556 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 22:50:59.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.680891 systemd[1]: Starting ensure-sysext.service... Dec 12 22:50:59.682695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 12 22:50:59.684000 audit: BPF prog-id=31 op=LOAD Dec 12 22:50:59.684000 audit: BPF prog-id=22 op=UNLOAD Dec 12 22:50:59.684000 audit: BPF prog-id=32 op=LOAD Dec 12 22:50:59.684000 audit: BPF prog-id=33 op=LOAD Dec 12 22:50:59.684000 audit: BPF prog-id=23 op=UNLOAD Dec 12 22:50:59.684000 audit: BPF prog-id=24 op=UNLOAD Dec 12 22:50:59.684000 audit: BPF prog-id=34 op=LOAD Dec 12 22:50:59.684000 audit: BPF prog-id=18 op=UNLOAD Dec 12 22:50:59.684000 audit: BPF prog-id=35 op=LOAD Dec 12 22:50:59.684000 audit: BPF prog-id=36 op=LOAD Dec 12 22:50:59.684000 audit: BPF prog-id=19 op=UNLOAD Dec 12 22:50:59.684000 audit: BPF prog-id=20 op=UNLOAD Dec 12 22:50:59.685000 audit: BPF prog-id=37 op=LOAD Dec 12 22:50:59.685000 audit: BPF prog-id=30 op=UNLOAD Dec 12 22:50:59.686000 audit: BPF prog-id=38 op=LOAD Dec 12 22:50:59.686000 audit: BPF prog-id=15 op=UNLOAD Dec 12 22:50:59.686000 audit: BPF prog-id=39 op=LOAD Dec 12 22:50:59.686000 audit: BPF prog-id=40 op=LOAD Dec 12 22:50:59.686000 audit: BPF prog-id=16 op=UNLOAD Dec 12 22:50:59.686000 audit: BPF prog-id=17 op=UNLOAD Dec 12 22:50:59.687000 audit: BPF prog-id=41 op=LOAD Dec 12 22:50:59.687000 audit: BPF prog-id=21 op=UNLOAD Dec 12 22:50:59.687000 audit: BPF prog-id=42 op=LOAD Dec 12 22:50:59.687000 audit: BPF prog-id=25 op=UNLOAD Dec 12 22:50:59.688000 audit: BPF prog-id=43 op=LOAD Dec 12 22:50:59.688000 audit: BPF prog-id=44 op=LOAD Dec 12 22:50:59.688000 audit: BPF prog-id=26 op=UNLOAD Dec 12 22:50:59.688000 audit: BPF prog-id=27 op=UNLOAD Dec 12 22:50:59.688000 audit: BPF prog-id=45 op=LOAD Dec 12 22:50:59.688000 audit: BPF prog-id=46 op=LOAD Dec 12 22:50:59.688000 audit: BPF prog-id=28 op=UNLOAD Dec 12 22:50:59.688000 audit: BPF prog-id=29 op=UNLOAD Dec 12 22:50:59.693573 systemd[1]: Reload requested from client PID 1416 ('systemctl') (unit ensure-sysext.service)... Dec 12 22:50:59.693587 systemd[1]: Reloading... Dec 12 22:50:59.697414 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 22:50:59.697455 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 22:50:59.697728 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 22:50:59.698659 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Dec 12 22:50:59.698730 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Dec 12 22:50:59.704107 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 22:50:59.704120 systemd-tmpfiles[1417]: Skipping /boot Dec 12 22:50:59.710720 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 22:50:59.710734 systemd-tmpfiles[1417]: Skipping /boot Dec 12 22:50:59.738611 zram_generator::config[1454]: No configuration found. Dec 12 22:50:59.903457 systemd[1]: Reloading finished in 209 ms. 
Dec 12 22:50:59.924000 audit: BPF prog-id=47 op=LOAD Dec 12 22:50:59.924000 audit: BPF prog-id=31 op=UNLOAD Dec 12 22:50:59.924000 audit: BPF prog-id=48 op=LOAD Dec 12 22:50:59.924000 audit: BPF prog-id=49 op=LOAD Dec 12 22:50:59.924000 audit: BPF prog-id=32 op=UNLOAD Dec 12 22:50:59.924000 audit: BPF prog-id=33 op=UNLOAD Dec 12 22:50:59.925000 audit: BPF prog-id=50 op=LOAD Dec 12 22:50:59.925000 audit: BPF prog-id=37 op=UNLOAD Dec 12 22:50:59.925000 audit: BPF prog-id=51 op=LOAD Dec 12 22:50:59.925000 audit: BPF prog-id=34 op=UNLOAD Dec 12 22:50:59.925000 audit: BPF prog-id=52 op=LOAD Dec 12 22:50:59.925000 audit: BPF prog-id=53 op=LOAD Dec 12 22:50:59.925000 audit: BPF prog-id=35 op=UNLOAD Dec 12 22:50:59.925000 audit: BPF prog-id=36 op=UNLOAD Dec 12 22:50:59.926000 audit: BPF prog-id=54 op=LOAD Dec 12 22:50:59.926000 audit: BPF prog-id=38 op=UNLOAD Dec 12 22:50:59.926000 audit: BPF prog-id=55 op=LOAD Dec 12 22:50:59.926000 audit: BPF prog-id=56 op=LOAD Dec 12 22:50:59.926000 audit: BPF prog-id=39 op=UNLOAD Dec 12 22:50:59.926000 audit: BPF prog-id=40 op=UNLOAD Dec 12 22:50:59.926000 audit: BPF prog-id=57 op=LOAD Dec 12 22:50:59.926000 audit: BPF prog-id=42 op=UNLOAD Dec 12 22:50:59.926000 audit: BPF prog-id=58 op=LOAD Dec 12 22:50:59.927000 audit: BPF prog-id=59 op=LOAD Dec 12 22:50:59.927000 audit: BPF prog-id=43 op=UNLOAD Dec 12 22:50:59.927000 audit: BPF prog-id=44 op=UNLOAD Dec 12 22:50:59.928000 audit: BPF prog-id=60 op=LOAD Dec 12 22:50:59.944000 audit: BPF prog-id=41 op=UNLOAD Dec 12 22:50:59.944000 audit: BPF prog-id=61 op=LOAD Dec 12 22:50:59.944000 audit: BPF prog-id=62 op=LOAD Dec 12 22:50:59.944000 audit: BPF prog-id=45 op=UNLOAD Dec 12 22:50:59.944000 audit: BPF prog-id=46 op=UNLOAD Dec 12 22:50:59.947170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 22:50:59.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:50:59.957300 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 22:50:59.959773 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 22:50:59.970819 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 22:50:59.973497 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 22:50:59.975768 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 22:50:59.979986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 22:50:59.981373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 22:50:59.984791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 22:50:59.987376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 22:50:59.988839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 22:50:59.989049 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 22:50:59.989143 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 22:50:59.991304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 22:50:59.991480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 22:50:59.991647 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 22:50:59.991743 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 22:50:59.994512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 22:50:59.995614 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 22:50:59.996938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 22:50:59.997165 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 22:50:59.997249 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 22:51:00.000574 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 22:51:00.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.003383 systemd[1]: Finished ensure-sysext.service. Dec 12 22:51:00.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.005199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 22:51:00.005400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 22:51:00.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.008226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 22:51:00.008432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 22:51:00.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:51:00.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.012157 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 22:51:00.012355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 22:51:00.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.018319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 22:51:00.018414 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 22:51:00.019000 audit[1493]: SYSTEM_BOOT pid=1493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.020000 audit: BPF prog-id=63 op=LOAD Dec 12 22:51:00.023775 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 22:51:00.025513 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 22:51:00.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.027098 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 22:51:00.027299 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 22:51:00.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:00.033760 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 22:51:00.034968 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 22:51:00.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:51:00.038000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 22:51:00.038000 audit[1524]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda3880f0 a2=420 a3=0 items=0 ppid=1486 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:00.038000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 22:51:00.039348 augenrules[1524]: No rules Dec 12 22:51:00.040679 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 22:51:00.042787 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 22:51:00.079482 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 22:51:00.080222 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 22:51:00.080268 systemd-timesyncd[1516]: Initial clock synchronization to Fri 2025-12-12 22:51:00.334851 UTC. Dec 12 22:51:00.083066 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 22:51:00.221601 ldconfig[1488]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 22:51:00.227591 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 22:51:00.230080 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 22:51:00.255357 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 22:51:00.256767 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 22:51:00.257833 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 22:51:00.259011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 22:51:00.260389 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 22:51:00.261496 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 22:51:00.262821 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 22:51:00.264028 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 22:51:00.265052 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 22:51:00.266231 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 22:51:00.266273 systemd[1]: Reached target paths.target - Path Units. Dec 12 22:51:00.267148 systemd[1]: Reached target timers.target - Timer Units. Dec 12 22:51:00.268820 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 22:51:00.271096 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 22:51:00.273979 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 22:51:00.275355 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 22:51:00.276616 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 22:51:00.282438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Dec 12 22:51:00.283813 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 22:51:00.285609 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 22:51:00.286660 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 22:51:00.287559 systemd[1]: Reached target basic.target - Basic System. Dec 12 22:51:00.288421 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 22:51:00.288457 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 22:51:00.289440 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 22:51:00.291470 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 22:51:00.293393 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 22:51:00.295495 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 22:51:00.297398 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 22:51:00.298461 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 22:51:00.299441 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 22:51:00.303643 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 22:51:00.304760 jq[1543]: false Dec 12 22:51:00.305422 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 22:51:00.307687 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 22:51:00.311545 extend-filesystems[1544]: Found /dev/vda6 Dec 12 22:51:00.312704 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 22:51:00.313724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 22:51:00.317699 extend-filesystems[1544]: Found /dev/vda9 Dec 12 22:51:00.317699 extend-filesystems[1544]: Checking size of /dev/vda9 Dec 12 22:51:00.314120 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 22:51:00.314642 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 22:51:00.318737 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 22:51:00.328811 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 22:51:00.330497 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 22:51:00.330765 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 22:51:00.331021 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 22:51:00.332490 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 22:51:00.335348 extend-filesystems[1544]: Resized partition /dev/vda9 Dec 12 22:51:00.337180 jq[1561]: true Dec 12 22:51:00.338893 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 22:51:00.339097 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 12 22:51:00.341071 extend-filesystems[1572]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 22:51:00.348545 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 12 22:51:00.360264 update_engine[1559]: I20251212 22:51:00.359247 1559 main.cc:92] Flatcar Update Engine starting Dec 12 22:51:00.367896 jq[1575]: true Dec 12 22:51:00.369554 tar[1571]: linux-arm64/LICENSE Dec 12 22:51:00.369944 tar[1571]: linux-arm64/helm Dec 12 22:51:00.375622 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 12 22:51:00.386334 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 22:51:00.386334 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 22:51:00.386334 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 12 22:51:00.391773 extend-filesystems[1544]: Resized filesystem in /dev/vda9 Dec 12 22:51:00.391586 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 22:51:00.399632 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 22:51:00.402134 dbus-daemon[1541]: [system] SELinux support is enabled Dec 12 22:51:00.402791 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 22:51:00.405884 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 22:51:00.405919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 22:51:00.407548 update_engine[1559]: I20251212 22:51:00.407232 1559 update_check_scheduler.cc:74] Next update check in 9m42s Dec 12 22:51:00.408699 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 22:51:00.408724 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 22:51:00.410119 systemd[1]: Started update-engine.service - Update Engine. Dec 12 22:51:00.414505 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 22:51:00.433481 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 22:51:00.434659 systemd-logind[1556]: New seat seat0. Dec 12 22:51:00.437406 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 22:51:00.443418 bash[1609]: Updated "/home/core/.ssh/authorized_keys" Dec 12 22:51:00.445861 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 22:51:00.448378 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 12 22:51:00.498694 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 22:51:00.552735 containerd[1577]: time="2025-12-12T22:51:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 22:51:00.553885 containerd[1577]: time="2025-12-12T22:51:00.553852680Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 22:51:00.570351 containerd[1577]: time="2025-12-12T22:51:00.570235800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.48µs" Dec 12 22:51:00.570351 containerd[1577]: time="2025-12-12T22:51:00.570336040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 22:51:00.570453 containerd[1577]: time="2025-12-12T22:51:00.570431760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 22:51:00.570453 containerd[1577]: time="2025-12-12T22:51:00.570449360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 22:51:00.570769 containerd[1577]: time="2025-12-12T22:51:00.570736480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 22:51:00.570833 containerd[1577]: time="2025-12-12T22:51:00.570767000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 22:51:00.570901 containerd[1577]: time="2025-12-12T22:51:00.570882160Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 22:51:00.570966 containerd[1577]: time="2025-12-12T22:51:00.570950120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 22:51:00.571375 containerd[1577]: time="2025-12-12T22:51:00.571335400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 22:51:00.571375 containerd[1577]: time="2025-12-12T22:51:00.571361400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 22:51:00.571427 containerd[1577]: time="2025-12-12T22:51:00.571375880Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 22:51:00.571446 containerd[1577]: time="2025-12-12T22:51:00.571384520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 22:51:00.571855 containerd[1577]: time="2025-12-12T22:51:00.571769280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 22:51:00.571970 containerd[1577]: time="2025-12-12T22:51:00.571949880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 22:51:00.572275 containerd[1577]: time="2025-12-12T22:51:00.572239920Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 22:51:00.572297 containerd[1577]: time="2025-12-12T22:51:00.572281040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 22:51:00.572297 containerd[1577]: time="2025-12-12T22:51:00.572292200Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 22:51:00.572834 containerd[1577]: time="2025-12-12T22:51:00.572810120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 22:51:00.574843 containerd[1577]: time="2025-12-12T22:51:00.574797440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 22:51:00.575036 containerd[1577]: time="2025-12-12T22:51:00.574954120Z" level=info msg="metadata content store policy set" policy=shared Dec 12 22:51:00.579620 containerd[1577]: time="2025-12-12T22:51:00.579580640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 22:51:00.579694 containerd[1577]: time="2025-12-12T22:51:00.579639000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 22:51:00.580076 containerd[1577]: time="2025-12-12T22:51:00.579995400Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 22:51:00.580076 containerd[1577]: time="2025-12-12T22:51:00.580072680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 22:51:00.580145 containerd[1577]: time="2025-12-12T22:51:00.580089640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 22:51:00.580145 containerd[1577]: time="2025-12-12T22:51:00.580106680Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 22:51:00.580145 containerd[1577]: time="2025-12-12T22:51:00.580117080Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 22:51:00.580145 containerd[1577]: time="2025-12-12T22:51:00.580126080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 22:51:00.580145 containerd[1577]: time="2025-12-12T22:51:00.580137840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 22:51:00.580225 containerd[1577]: time="2025-12-12T22:51:00.580195000Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 22:51:00.580225 containerd[1577]: time="2025-12-12T22:51:00.580214800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 22:51:00.580258 containerd[1577]: time="2025-12-12T22:51:00.580225800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 22:51:00.580342 containerd[1577]: time="2025-12-12T22:51:00.580281920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 22:51:00.580401 containerd[1577]: 
time="2025-12-12T22:51:00.580385600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 22:51:00.580696 containerd[1577]: time="2025-12-12T22:51:00.580673800Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 22:51:00.580721 containerd[1577]: time="2025-12-12T22:51:00.580706680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 22:51:00.580787 containerd[1577]: time="2025-12-12T22:51:00.580724840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 22:51:00.580787 containerd[1577]: time="2025-12-12T22:51:00.580737360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 22:51:00.580787 containerd[1577]: time="2025-12-12T22:51:00.580748800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 22:51:00.580844 containerd[1577]: time="2025-12-12T22:51:00.580767400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 22:51:00.580844 containerd[1577]: time="2025-12-12T22:51:00.580831040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 22:51:00.580876 containerd[1577]: time="2025-12-12T22:51:00.580848960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 22:51:00.580876 containerd[1577]: time="2025-12-12T22:51:00.580861480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 22:51:00.580876 containerd[1577]: time="2025-12-12T22:51:00.580874600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 22:51:00.580927 containerd[1577]: time="2025-12-12T22:51:00.580884920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 22:51:00.580927 containerd[1577]: time="2025-12-12T22:51:00.580909800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 22:51:00.581349 containerd[1577]: time="2025-12-12T22:51:00.581320680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 22:51:00.581377 containerd[1577]: time="2025-12-12T22:51:00.581354960Z" level=info msg="Start snapshots syncer" Dec 12 22:51:00.581803 containerd[1577]: time="2025-12-12T22:51:00.581778840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 22:51:00.582779 containerd[1577]: time="2025-12-12T22:51:00.582670520Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 22:51:00.582879 containerd[1577]: time="2025-12-12T22:51:00.582785000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 22:51:00.582902 containerd[1577]: time="2025-12-12T22:51:00.582885480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 22:51:00.583103 containerd[1577]: time="2025-12-12T22:51:00.583081680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 22:51:00.583536 containerd[1577]: time="2025-12-12T22:51:00.583495160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 22:51:00.583563 containerd[1577]: time="2025-12-12T22:51:00.583537760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 22:51:00.583563 containerd[1577]: time="2025-12-12T22:51:00.583556840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 22:51:00.583605 containerd[1577]: time="2025-12-12T22:51:00.583571240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 22:51:00.583605 containerd[1577]: time="2025-12-12T22:51:00.583582040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 22:51:00.583605 containerd[1577]: time="2025-12-12T22:51:00.583592520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 22:51:00.583682 containerd[1577]: time="2025-12-12T22:51:00.583655840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
22:51:00.583743 containerd[1577]: time="2025-12-12T22:51:00.583729000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 22:51:00.584015 containerd[1577]: time="2025-12-12T22:51:00.583992120Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 22:51:00.584250 containerd[1577]: time="2025-12-12T22:51:00.584023000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 22:51:00.584275 containerd[1577]: time="2025-12-12T22:51:00.584252800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 22:51:00.584293 containerd[1577]: time="2025-12-12T22:51:00.584275400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 22:51:00.584293 containerd[1577]: time="2025-12-12T22:51:00.584283960Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 22:51:00.584326 containerd[1577]: time="2025-12-12T22:51:00.584299720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 22:51:00.584326 containerd[1577]: time="2025-12-12T22:51:00.584310760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 22:51:00.584395 containerd[1577]: time="2025-12-12T22:51:00.584384760Z" level=info msg="runtime interface created" Dec 12 22:51:00.584395 containerd[1577]: time="2025-12-12T22:51:00.584393800Z" level=info msg="created NRI interface" Dec 12 22:51:00.584434 containerd[1577]: time="2025-12-12T22:51:00.584404280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 22:51:00.584434 containerd[1577]: time="2025-12-12T22:51:00.584416280Z" level=info msg="Connect containerd service" Dec 12 22:51:00.584467 containerd[1577]: time="2025-12-12T22:51:00.584445680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 22:51:00.587191 containerd[1577]: time="2025-12-12T22:51:00.587153240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 22:51:00.673994 containerd[1577]: time="2025-12-12T22:51:00.673936240Z" level=info msg="Start subscribing containerd event" Dec 12 22:51:00.674369 containerd[1577]: time="2025-12-12T22:51:00.674222680Z" level=info msg="Start recovering state" Dec 12 22:51:00.674661 containerd[1577]: time="2025-12-12T22:51:00.674638600Z" level=info msg="Start event monitor" Dec 12 22:51:00.674761 containerd[1577]: time="2025-12-12T22:51:00.674746640Z" level=info msg="Start cni network conf syncer for default" Dec 12 22:51:00.674811 containerd[1577]: time="2025-12-12T22:51:00.674799560Z" level=info msg="Start streaming server" Dec 12 22:51:00.675067 containerd[1577]: time="2025-12-12T22:51:00.674898000Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 22:51:00.675067 containerd[1577]: time="2025-12-12T22:51:00.674912360Z" level=info msg="runtime interface starting up..." 
Dec 12 22:51:00.675067 containerd[1577]: time="2025-12-12T22:51:00.674919160Z" level=info msg="starting plugins..." Dec 12 22:51:00.675067 containerd[1577]: time="2025-12-12T22:51:00.674938800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 22:51:00.675275 containerd[1577]: time="2025-12-12T22:51:00.675232840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 22:51:00.675320 containerd[1577]: time="2025-12-12T22:51:00.675301680Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 22:51:00.676159 containerd[1577]: time="2025-12-12T22:51:00.676133160Z" level=info msg="containerd successfully booted in 0.123876s" Dec 12 22:51:00.676308 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 22:51:00.680141 tar[1571]: linux-arm64/README.md Dec 12 22:51:00.694475 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 22:51:01.080697 systemd-networkd[1292]: eth0: Gained IPv6LL Dec 12 22:51:01.082618 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 22:51:01.085039 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 22:51:01.087714 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 22:51:01.090220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 22:51:01.093831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 22:51:01.120886 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 22:51:01.122947 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 22:51:01.124805 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 22:51:01.126861 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 22:51:01.204915 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 22:51:01.226077 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 22:51:01.229063 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 22:51:01.249586 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 22:51:01.249850 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 22:51:01.252717 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 22:51:01.270246 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 22:51:01.273636 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 22:51:01.276902 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 22:51:01.278319 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 22:51:01.659086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:01.660704 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 22:51:01.661980 systemd[1]: Startup finished in 1.506s (kernel) + 4.905s (initrd) + 3.736s (userspace) = 10.148s. 
Dec 12 22:51:01.664161 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 22:51:02.016819 kubelet[1680]: E1212 22:51:02.016774 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 22:51:02.019135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 22:51:02.019270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 22:51:02.019615 systemd[1]: kubelet.service: Consumed 748ms CPU time, 256.5M memory peak. Dec 12 22:51:04.235624 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 22:51:04.236804 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:34506.service - OpenSSH per-connection server daemon (10.0.0.1:34506). Dec 12 22:51:04.311895 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 34506 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:04.313857 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:04.320697 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 22:51:04.321696 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 22:51:04.327385 systemd-logind[1556]: New session 1 of user core. Dec 12 22:51:04.345066 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 22:51:04.348378 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 22:51:04.363571 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:04.366393 systemd-logind[1556]: New session 2 of user core. Dec 12 22:51:04.500652 systemd[1699]: Queued start job for default target default.target. Dec 12 22:51:04.524666 systemd[1699]: Created slice app.slice - User Application Slice. Dec 12 22:51:04.524703 systemd[1699]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 22:51:04.524717 systemd[1699]: Reached target paths.target - Paths. Dec 12 22:51:04.524776 systemd[1699]: Reached target timers.target - Timers. Dec 12 22:51:04.526320 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 22:51:04.527146 systemd[1699]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 22:51:04.537870 systemd[1699]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 22:51:04.554111 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 22:51:04.554232 systemd[1699]: Reached target sockets.target - Sockets. Dec 12 22:51:04.554277 systemd[1699]: Reached target basic.target - Basic System. Dec 12 22:51:04.554306 systemd[1699]: Reached target default.target - Main User Target. Dec 12 22:51:04.554334 systemd[1699]: Startup finished in 181ms. Dec 12 22:51:04.554611 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 22:51:04.556244 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 22:51:04.570466 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:34522.service - OpenSSH per-connection server daemon (10.0.0.1:34522). 
Dec 12 22:51:04.627611 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 34522 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:04.629000 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:04.633985 systemd-logind[1556]: New session 3 of user core. Dec 12 22:51:04.642760 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 22:51:04.654316 sshd[1717]: Connection closed by 10.0.0.1 port 34522 Dec 12 22:51:04.654607 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Dec 12 22:51:04.668538 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:34522.service: Deactivated successfully. Dec 12 22:51:04.672599 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 22:51:04.673701 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Dec 12 22:51:04.676737 systemd-logind[1556]: Removed session 3. Dec 12 22:51:04.677309 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:34530.service - OpenSSH per-connection server daemon (10.0.0.1:34530). Dec 12 22:51:04.740329 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 34530 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:04.741742 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:04.746096 systemd-logind[1556]: New session 4 of user core. Dec 12 22:51:04.760755 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 22:51:04.768050 sshd[1727]: Connection closed by 10.0.0.1 port 34530 Dec 12 22:51:04.768477 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Dec 12 22:51:04.781134 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:34530.service: Deactivated successfully. Dec 12 22:51:04.782855 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 22:51:04.783596 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Dec 12 22:51:04.786103 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:34532.service - OpenSSH per-connection server daemon (10.0.0.1:34532). Dec 12 22:51:04.786817 systemd-logind[1556]: Removed session 4. Dec 12 22:51:04.853285 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 34532 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:04.854794 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:04.859724 systemd-logind[1556]: New session 5 of user core. Dec 12 22:51:04.871767 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 22:51:04.884450 sshd[1737]: Connection closed by 10.0.0.1 port 34532 Dec 12 22:51:04.884886 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Dec 12 22:51:04.898742 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:34532.service: Deactivated successfully. Dec 12 22:51:04.900518 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 22:51:04.901420 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Dec 12 22:51:04.904005 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:34536.service - OpenSSH per-connection server daemon (10.0.0.1:34536). Dec 12 22:51:04.904673 systemd-logind[1556]: Removed session 5. 
Dec 12 22:51:04.964958 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 34536 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:04.966413 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:04.971392 systemd-logind[1556]: New session 6 of user core. Dec 12 22:51:04.982773 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 22:51:05.002057 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 22:51:05.002347 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 22:51:05.019576 sudo[1748]: pam_unix(sudo:session): session closed for user root Dec 12 22:51:05.021603 sshd[1747]: Connection closed by 10.0.0.1 port 34536 Dec 12 22:51:05.021864 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Dec 12 22:51:05.044938 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:34536.service: Deactivated successfully. Dec 12 22:51:05.048148 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 22:51:05.049102 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Dec 12 22:51:05.051417 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538). Dec 12 22:51:05.052864 systemd-logind[1556]: Removed session 6. Dec 12 22:51:05.114835 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:05.116283 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:05.121323 systemd-logind[1556]: New session 7 of user core. Dec 12 22:51:05.132793 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 22:51:05.145297 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 22:51:05.145577 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 22:51:05.148965 sudo[1761]: pam_unix(sudo:session): session closed for user root Dec 12 22:51:05.155924 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 22:51:05.156193 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 22:51:05.163470 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 22:51:05.205000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 22:51:05.205981 augenrules[1785]: No rules Dec 12 22:51:05.206583 kernel: kauditd_printk_skb: 198 callbacks suppressed Dec 12 22:51:05.206621 kernel: audit: type=1305 audit(1765579865.205:241): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 22:51:05.206917 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 22:51:05.207161 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 12 22:51:05.205000 audit[1785]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf544b10 a2=420 a3=0 items=0 ppid=1766 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:05.209055 sudo[1760]: pam_unix(sudo:session): session closed for user root Dec 12 22:51:05.212129 kernel: audit: type=1300 audit(1765579865.205:241): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf544b10 a2=420 a3=0 items=0 ppid=1766 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:05.212216 kernel: audit: type=1327 audit(1765579865.205:241): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 22:51:05.205000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 22:51:05.214014 sshd[1759]: Connection closed by 10.0.0.1 port 34538 Dec 12 22:51:05.214249 kernel: audit: type=1130 audit(1765579865.206:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.213069 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Dec 12 22:51:05.217063 kernel: audit: type=1131 audit(1765579865.206:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.207000 audit[1760]: USER_END pid=1760 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.222075 kernel: audit: type=1106 audit(1765579865.207:244): pid=1760 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.222138 kernel: audit: type=1104 audit(1765579865.207:245): pid=1760 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.207000 audit[1760]: CRED_DISP pid=1760 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 22:51:05.213000 audit[1755]: USER_END pid=1755 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.228617 kernel: audit: type=1106 audit(1765579865.213:246): pid=1755 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.228654 kernel: audit: type=1104 audit(1765579865.213:247): pid=1755 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.213000 audit[1755]: CRED_DISP pid=1755 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.235939 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:34538.service: Deactivated successfully. Dec 12 22:51:05.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.28:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.237663 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 22:51:05.239606 kernel: audit: type=1131 audit(1765579865.235:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.28:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.239851 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. Dec 12 22:51:05.241624 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:34552.service - OpenSSH per-connection server daemon (10.0.0.1:34552). Dec 12 22:51:05.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.28:22-10.0.0.1:34552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.242600 systemd-logind[1556]: Removed session 7. 
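The PROCTITLE values in the audit records above are the executed command line, hex-encoded with NUL bytes between arguments. Decoding the value from the type=1327 auditctl record recovers the command that reloaded the (now empty) rule set; the helper below is an illustrative sketch, not part of auditd.

    def decode_proctitle(hexstr: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
        return bytes.fromhex(hexstr).replace(b'\x00', b' ').decode()

    # Value copied from the type=1327 record above.
    print(decode_proctitle(
        '2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573'))
    # -> /sbin/auditctl -R /etc/audit/audit.rules

The same helper applies to the iptables and ip6tables PROCTITLE records emitted further down while dockerd sets up its chains.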
Dec 12 22:51:05.306000 audit[1794]: USER_ACCT pid=1794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.308442 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 34552 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:51:05.308000 audit[1794]: CRED_ACQ pid=1794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.308000 audit[1794]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca2cd250 a2=3 a3=0 items=0 ppid=1 pid=1794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:05.308000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:51:05.309196 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:51:05.313291 systemd-logind[1556]: New session 8 of user core. Dec 12 22:51:05.322765 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 22:51:05.323000 audit[1794]: USER_START pid=1794 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.324000 audit[1798]: CRED_ACQ pid=1798 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:05.333000 audit[1799]: USER_ACCT pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.335236 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 22:51:05.333000 audit[1799]: CRED_REFR pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.333000 audit[1799]: USER_START pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:05.335510 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 22:51:05.625470 systemd[1]: Starting docker.service - Docker Application Container Engine... 
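Taken together, the sudo records in these sessions give a complete picture of what the core user ran as root: setenforce 1, removal of the two audit rule files, the audit-rules restart, and finally /home/core/install.sh. A small sketch for extracting that list from journal text; the sample lines are copied from the entries above (timestamps trimmed) and the parsing is illustrative only.

    # sudo records copied from the journal above; a real run would read the full journal text.
    lines = [
        'sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1',
        'sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf '
        '/etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules',
        'sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules',
        'sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh',
    ]

    for line in lines:
        user, _, rest = line.split(']: ', 1)[1].partition(' : ')
        print(user, 'ran:', rest.split('COMMAND=', 1)[1])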
Dec 12 22:51:05.638821 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 22:51:05.898249 dockerd[1821]: time="2025-12-12T22:51:05.898118363Z" level=info msg="Starting up" Dec 12 22:51:05.902143 dockerd[1821]: time="2025-12-12T22:51:05.901976119Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 22:51:05.913254 dockerd[1821]: time="2025-12-12T22:51:05.913208256Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 22:51:06.122242 dockerd[1821]: time="2025-12-12T22:51:06.122197313Z" level=info msg="Loading containers: start." Dec 12 22:51:06.131567 kernel: Initializing XFRM netlink socket Dec 12 22:51:06.173000 audit[1876]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.173000 audit[1876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc5fb57c0 a2=0 a3=0 items=0 ppid=1821 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 22:51:06.175000 audit[1878]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.175000 audit[1878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd5325be0 a2=0 a3=0 items=0 ppid=1821 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 22:51:06.177000 audit[1880]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.177000 audit[1880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9016080 a2=0 a3=0 items=0 ppid=1821 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 22:51:06.179000 audit[1882]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.179000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb699b80 a2=0 a3=0 items=0 ppid=1821 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 22:51:06.182000 audit[1884]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.182000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcbfff0c0 a2=0 a3=0 items=0 ppid=1821 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 22:51:06.183000 audit[1886]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.183000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc3097d70 a2=0 a3=0 items=0 ppid=1821 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 22:51:06.185000 audit[1888]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.185000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc1306ce0 a2=0 a3=0 items=0 ppid=1821 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 22:51:06.187000 audit[1890]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.187000 audit[1890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffcaccb040 a2=0 a3=0 items=0 ppid=1821 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 22:51:06.224000 audit[1893]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.224000 audit[1893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc41eae70 a2=0 a3=0 items=0 ppid=1821 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 22:51:06.226000 audit[1895]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1895 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.226000 audit[1895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffee28d6c0 a2=0 a3=0 items=0 ppid=1821 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 22:51:06.228000 audit[1897]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.228000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe507df40 a2=0 a3=0 items=0 ppid=1821 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 22:51:06.230000 audit[1899]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.230000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcf148330 a2=0 a3=0 items=0 ppid=1821 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 22:51:06.232000 audit[1901]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.232000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe456bb30 a2=0 a3=0 items=0 ppid=1821 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 22:51:06.267000 audit[1931]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.267000 audit[1931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff40e6590 a2=0 a3=0 items=0 ppid=1821 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 22:51:06.269000 audit[1933]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.269000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcea71870 a2=0 a3=0 items=0 ppid=1821 
pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 22:51:06.271000 audit[1935]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.271000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3120380 a2=0 a3=0 items=0 ppid=1821 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 22:51:06.273000 audit[1937]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.273000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd20a2850 a2=0 a3=0 items=0 ppid=1821 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.273000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 22:51:06.274000 audit[1939]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.274000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdde42f70 a2=0 a3=0 items=0 ppid=1821 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 22:51:06.275000 audit[1941]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.275000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdddf1600 a2=0 a3=0 items=0 ppid=1821 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 22:51:06.277000 audit[1943]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.277000 audit[1943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe1f54430 a2=0 a3=0 items=0 ppid=1821 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
22:51:06.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 22:51:06.280000 audit[1945]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.280000 audit[1945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc4da8340 a2=0 a3=0 items=0 ppid=1821 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 22:51:06.283000 audit[1947]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.283000 audit[1947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffffe247e00 a2=0 a3=0 items=0 ppid=1821 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 22:51:06.285000 audit[1949]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.285000 audit[1949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe874f960 a2=0 a3=0 items=0 ppid=1821 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 22:51:06.287000 audit[1951]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.287000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff4d21ad0 a2=0 a3=0 items=0 ppid=1821 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 22:51:06.289000 audit[1953]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.289000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffffbca8390 a2=0 a3=0 items=0 ppid=1821 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.289000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 22:51:06.292000 audit[1955]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.292000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffff4df130 a2=0 a3=0 items=0 ppid=1821 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 22:51:06.299000 audit[1960]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.299000 audit[1960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7887250 a2=0 a3=0 items=0 ppid=1821 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 22:51:06.301000 audit[1962]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.301000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcd59f500 a2=0 a3=0 items=0 ppid=1821 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 22:51:06.303000 audit[1964]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.303000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc2889bd0 a2=0 a3=0 items=0 ppid=1821 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 22:51:06.307000 audit[1966]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.307000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb72dcb0 a2=0 a3=0 items=0 ppid=1821 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.307000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 22:51:06.309000 audit[1968]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.309000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd91f0940 a2=0 a3=0 items=0 ppid=1821 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 22:51:06.311000 audit[1970]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:06.311000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe7e74c70 a2=0 a3=0 items=0 ppid=1821 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 22:51:06.328000 audit[1975]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.328000 audit[1975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc2acf1c0 a2=0 a3=0 items=0 ppid=1821 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 22:51:06.330000 audit[1977]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.330000 audit[1977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd55ce450 a2=0 a3=0 items=0 ppid=1821 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 22:51:06.338000 audit[1985]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.338000 audit[1985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd4189f20 a2=0 a3=0 items=0 ppid=1821 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 22:51:06.345000 audit[1991]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 
22:51:06.345000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe9ac7410 a2=0 a3=0 items=0 ppid=1821 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 22:51:06.348000 audit[1993]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.348000 audit[1993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc7619fd0 a2=0 a3=0 items=0 ppid=1821 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 22:51:06.351000 audit[1995]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.351000 audit[1995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd9146720 a2=0 a3=0 items=0 ppid=1821 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 22:51:06.353000 audit[1997]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.353000 audit[1997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffec9d30c0 a2=0 a3=0 items=0 ppid=1821 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 22:51:06.355000 audit[1999]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:06.355000 audit[1999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe05dcb50 a2=0 a3=0 items=0 ppid=1821 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:06.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 22:51:06.357452 
systemd-networkd[1292]: docker0: Link UP Dec 12 22:51:06.362575 dockerd[1821]: time="2025-12-12T22:51:06.362474594Z" level=info msg="Loading containers: done." Dec 12 22:51:06.381375 dockerd[1821]: time="2025-12-12T22:51:06.381182374Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 22:51:06.381375 dockerd[1821]: time="2025-12-12T22:51:06.381263884Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 22:51:06.382241 dockerd[1821]: time="2025-12-12T22:51:06.382201427Z" level=info msg="Initializing buildkit" Dec 12 22:51:06.409140 dockerd[1821]: time="2025-12-12T22:51:06.409093952Z" level=info msg="Completed buildkit initialization" Dec 12 22:51:06.414706 dockerd[1821]: time="2025-12-12T22:51:06.414645388Z" level=info msg="Daemon has completed initialization" Dec 12 22:51:06.414833 dockerd[1821]: time="2025-12-12T22:51:06.414722100Z" level=info msg="API listen on /run/docker.sock" Dec 12 22:51:06.415025 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 22:51:06.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:06.983779 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3683903171-merged.mount: Deactivated successfully. Dec 12 22:51:06.992864 containerd[1577]: time="2025-12-12T22:51:06.992598055Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 22:51:07.479295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295050888.mount: Deactivated successfully. 
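The NETFILTER_CFG burst above is dockerd creating its standard chain set (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) and the first rules in both the IPv4 (family=2) and IPv6 (family=10) nat and filter tables through xtables-nft. As an illustrative sketch, the snippet below tallies such records per table, address family, and operation; the two sample records are copied from above, and a real run would feed it the whole journal text.

    import re
    from collections import Counter

    # Sample NETFILTER_CFG records copied from the audit log above.
    journal = '''
    audit[1876]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1876
    audit[1931]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1931
    '''

    pat = re.compile(r'NETFILTER_CFG table=(\w+):\d+ family=(\d+) .* op=(\w+)')
    family_names = {'2': 'ipv4', '10': 'ipv6'}

    tally = Counter()
    for m in pat.finditer(journal):
        table, family, op = m.groups()
        tally[(table, family_names.get(family, family), op)] += 1

    for (table, family, op), n in sorted(tally.items()):
        print(f'{table}/{family}: {op} x{n}')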
Dec 12 22:51:08.471871 containerd[1577]: time="2025-12-12T22:51:08.471818866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:08.472651 containerd[1577]: time="2025-12-12T22:51:08.472301465Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835766" Dec 12 22:51:08.473338 containerd[1577]: time="2025-12-12T22:51:08.473306354Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:08.476559 containerd[1577]: time="2025-12-12T22:51:08.476509303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:08.477528 containerd[1577]: time="2025-12-12T22:51:08.477483978Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.484834374s" Dec 12 22:51:08.477528 containerd[1577]: time="2025-12-12T22:51:08.477521198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 22:51:08.478207 containerd[1577]: time="2025-12-12T22:51:08.478176003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 22:51:10.023796 containerd[1577]: time="2025-12-12T22:51:10.023743307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:10.025138 containerd[1577]: time="2025-12-12T22:51:10.025103160Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 12 22:51:10.026121 containerd[1577]: time="2025-12-12T22:51:10.026076186Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:10.028820 containerd[1577]: time="2025-12-12T22:51:10.028794519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:10.030517 containerd[1577]: time="2025-12-12T22:51:10.030473882Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.552266879s" Dec 12 22:51:10.030517 containerd[1577]: time="2025-12-12T22:51:10.030512326Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 
22:51:10.031052 containerd[1577]: time="2025-12-12T22:51:10.031028379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 22:51:11.018567 containerd[1577]: time="2025-12-12T22:51:11.018391776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:11.019330 containerd[1577]: time="2025-12-12T22:51:11.019049197Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300" Dec 12 22:51:11.020017 containerd[1577]: time="2025-12-12T22:51:11.019989608Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:11.023813 containerd[1577]: time="2025-12-12T22:51:11.023760166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:11.024736 containerd[1577]: time="2025-12-12T22:51:11.024316467Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 993.256679ms" Dec 12 22:51:11.024736 containerd[1577]: time="2025-12-12T22:51:11.024342483Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 22:51:11.024926 containerd[1577]: time="2025-12-12T22:51:11.024865628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 22:51:11.954456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360659902.mount: Deactivated successfully. Dec 12 22:51:12.269892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 22:51:12.271591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
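The "Scheduled restart job, restart counter is at 1" entry shows systemd re-queuing kubelet after the earlier config failure; the unit will keep cycling like this until /var/lib/kubelet/config.yaml exists. A small sketch, assuming plain journal text as input, that surfaces which units are restarting this way and how many notices each has produced; the sample line is copied from above and the parsing is illustrative.

    import re
    from collections import Counter

    # Restart notice copied from the journal above; feed the full journal text in practice.
    journal = '''
    systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
    '''

    restarts = Counter(
        m.group(1)
        for m in re.finditer(r'(\S+\.service): Scheduled restart job, restart counter is at \d+', journal)
    )
    for unit, n in restarts.most_common():
        print(f'{unit}: {n} restart notice(s)')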
Dec 12 22:51:12.280924 containerd[1577]: time="2025-12-12T22:51:12.280865212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:12.281656 containerd[1577]: time="2025-12-12T22:51:12.281601327Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=9841274" Dec 12 22:51:12.282290 containerd[1577]: time="2025-12-12T22:51:12.282239087Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:12.284380 containerd[1577]: time="2025-12-12T22:51:12.284343598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:12.284865 containerd[1577]: time="2025-12-12T22:51:12.284829896Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.259923418s" Dec 12 22:51:12.284865 containerd[1577]: time="2025-12-12T22:51:12.284862654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 22:51:12.285562 containerd[1577]: time="2025-12-12T22:51:12.285500051Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 22:51:12.411295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:12.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:12.414810 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 22:51:12.414884 kernel: audit: type=1130 audit(1765579872.410:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:12.415518 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 22:51:12.464637 kubelet[2124]: E1212 22:51:12.464564 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 22:51:12.467808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 22:51:12.467945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 22:51:12.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 22:51:12.468349 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.3M memory peak. 
Dec 12 22:51:12.471564 kernel: audit: type=1131 audit(1765579872.466:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 22:51:12.896784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355727097.mount: Deactivated successfully. Dec 12 22:51:13.610498 containerd[1577]: time="2025-12-12T22:51:13.610409564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:13.611280 containerd[1577]: time="2025-12-12T22:51:13.611212882Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Dec 12 22:51:13.611969 containerd[1577]: time="2025-12-12T22:51:13.611934762Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:13.615074 containerd[1577]: time="2025-12-12T22:51:13.615031855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:13.616620 containerd[1577]: time="2025-12-12T22:51:13.616587366Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.331058473s" Dec 12 22:51:13.616620 containerd[1577]: time="2025-12-12T22:51:13.616617841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 22:51:13.617170 containerd[1577]: time="2025-12-12T22:51:13.617142181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 22:51:14.102711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196732481.mount: Deactivated successfully. 
Dec 12 22:51:14.107220 containerd[1577]: time="2025-12-12T22:51:14.107141259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 22:51:14.107858 containerd[1577]: time="2025-12-12T22:51:14.107803192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 22:51:14.108674 containerd[1577]: time="2025-12-12T22:51:14.108643963Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 22:51:14.110474 containerd[1577]: time="2025-12-12T22:51:14.110443404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 22:51:14.112002 containerd[1577]: time="2025-12-12T22:51:14.111707698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 494.534607ms" Dec 12 22:51:14.112002 containerd[1577]: time="2025-12-12T22:51:14.111739515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 22:51:14.112237 containerd[1577]: time="2025-12-12T22:51:14.112104834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 22:51:14.673996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601696470.mount: Deactivated successfully. 
Dec 12 22:51:16.199553 containerd[1577]: time="2025-12-12T22:51:16.198710441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:16.199553 containerd[1577]: time="2025-12-12T22:51:16.199406459Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Dec 12 22:51:16.200611 containerd[1577]: time="2025-12-12T22:51:16.200581846Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:16.204107 containerd[1577]: time="2025-12-12T22:51:16.204059924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:16.205355 containerd[1577]: time="2025-12-12T22:51:16.205314490Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.093175841s" Dec 12 22:51:16.205355 containerd[1577]: time="2025-12-12T22:51:16.205354702Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 22:51:21.090949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:21.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:21.091503 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.3M memory peak. Dec 12 22:51:21.093647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 22:51:21.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:21.096590 kernel: audit: type=1130 audit(1765579881.090:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:21.096666 kernel: audit: type=1131 audit(1765579881.090:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:21.118431 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-8.scope)... Dec 12 22:51:21.118450 systemd[1]: Reloading... Dec 12 22:51:21.185556 zram_generator::config[2321]: No configuration found. Dec 12 22:51:21.430473 systemd[1]: Reloading finished in 311 ms. 
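Each containerd pull message above records both the bytes read and the wall time, which is enough to estimate per-image throughput. A sketch using two of the figures copied from the log (kube-apiserver and etcd; the other pulls follow the same pattern):

    # (image, bytes read, seconds) copied from the containerd pull messages above.
    pulls = [
        ('registry.k8s.io/kube-apiserver:v1.32.10', 24_835_766, 1.484834374),
        ('registry.k8s.io/etcd:3.5.16-0',           56_456_774, 2.093175841),
    ]

    for image, nbytes, secs in pulls:
        print(f'{image}: {nbytes / secs / (1024 * 1024):.1f} MiB/s')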
Dec 12 22:51:21.445000 audit: BPF prog-id=67 op=LOAD Dec 12 22:51:21.445000 audit: BPF prog-id=54 op=UNLOAD Dec 12 22:51:21.447782 kernel: audit: type=1334 audit(1765579881.445:303): prog-id=67 op=LOAD Dec 12 22:51:21.447823 kernel: audit: type=1334 audit(1765579881.445:304): prog-id=54 op=UNLOAD Dec 12 22:51:21.447840 kernel: audit: type=1334 audit(1765579881.446:305): prog-id=68 op=LOAD Dec 12 22:51:21.447859 kernel: audit: type=1334 audit(1765579881.447:306): prog-id=69 op=LOAD Dec 12 22:51:21.447886 kernel: audit: type=1334 audit(1765579881.447:307): prog-id=55 op=UNLOAD Dec 12 22:51:21.446000 audit: BPF prog-id=68 op=LOAD Dec 12 22:51:21.447000 audit: BPF prog-id=69 op=LOAD Dec 12 22:51:21.447000 audit: BPF prog-id=55 op=UNLOAD Dec 12 22:51:21.447000 audit: BPF prog-id=56 op=UNLOAD Dec 12 22:51:21.450756 kernel: audit: type=1334 audit(1765579881.447:308): prog-id=56 op=UNLOAD Dec 12 22:51:21.450794 kernel: audit: type=1334 audit(1765579881.449:309): prog-id=70 op=LOAD Dec 12 22:51:21.449000 audit: BPF prog-id=70 op=LOAD Dec 12 22:51:21.449000 audit: BPF prog-id=63 op=UNLOAD Dec 12 22:51:21.452200 kernel: audit: type=1334 audit(1765579881.449:310): prog-id=63 op=UNLOAD Dec 12 22:51:21.449000 audit: BPF prog-id=71 op=LOAD Dec 12 22:51:21.464000 audit: BPF prog-id=57 op=UNLOAD Dec 12 22:51:21.464000 audit: BPF prog-id=72 op=LOAD Dec 12 22:51:21.464000 audit: BPF prog-id=73 op=LOAD Dec 12 22:51:21.464000 audit: BPF prog-id=58 op=UNLOAD Dec 12 22:51:21.464000 audit: BPF prog-id=59 op=UNLOAD Dec 12 22:51:21.465000 audit: BPF prog-id=74 op=LOAD Dec 12 22:51:21.465000 audit: BPF prog-id=64 op=UNLOAD Dec 12 22:51:21.465000 audit: BPF prog-id=75 op=LOAD Dec 12 22:51:21.465000 audit: BPF prog-id=76 op=LOAD Dec 12 22:51:21.465000 audit: BPF prog-id=65 op=UNLOAD Dec 12 22:51:21.465000 audit: BPF prog-id=66 op=UNLOAD Dec 12 22:51:21.466000 audit: BPF prog-id=77 op=LOAD Dec 12 22:51:21.466000 audit: BPF prog-id=60 op=UNLOAD Dec 12 22:51:21.466000 audit: BPF prog-id=78 op=LOAD Dec 12 22:51:21.466000 audit: BPF prog-id=51 op=UNLOAD Dec 12 22:51:21.466000 audit: BPF prog-id=79 op=LOAD Dec 12 22:51:21.466000 audit: BPF prog-id=80 op=LOAD Dec 12 22:51:21.466000 audit: BPF prog-id=52 op=UNLOAD Dec 12 22:51:21.466000 audit: BPF prog-id=53 op=UNLOAD Dec 12 22:51:21.467000 audit: BPF prog-id=81 op=LOAD Dec 12 22:51:21.467000 audit: BPF prog-id=50 op=UNLOAD Dec 12 22:51:21.468000 audit: BPF prog-id=82 op=LOAD Dec 12 22:51:21.468000 audit: BPF prog-id=83 op=LOAD Dec 12 22:51:21.468000 audit: BPF prog-id=61 op=UNLOAD Dec 12 22:51:21.468000 audit: BPF prog-id=62 op=UNLOAD Dec 12 22:51:21.468000 audit: BPF prog-id=84 op=LOAD Dec 12 22:51:21.468000 audit: BPF prog-id=47 op=UNLOAD Dec 12 22:51:21.468000 audit: BPF prog-id=85 op=LOAD Dec 12 22:51:21.468000 audit: BPF prog-id=86 op=LOAD Dec 12 22:51:21.468000 audit: BPF prog-id=48 op=UNLOAD Dec 12 22:51:21.468000 audit: BPF prog-id=49 op=UNLOAD Dec 12 22:51:21.483032 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 22:51:21.483114 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 22:51:21.484571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:21.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 22:51:21.484632 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.2M memory peak. 
Dec 12 22:51:21.486251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 22:51:21.613159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:21.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:21.624832 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 22:51:21.657393 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 22:51:21.657393 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 22:51:21.657393 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 22:51:21.657754 kubelet[2363]: I1212 22:51:21.657448 2363 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 22:51:22.128009 kubelet[2363]: I1212 22:51:22.127970 2363 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 22:51:22.128219 kubelet[2363]: I1212 22:51:22.128208 2363 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 22:51:22.128567 kubelet[2363]: I1212 22:51:22.128550 2363 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 22:51:22.151839 kubelet[2363]: E1212 22:51:22.151792 2363 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Dec 12 22:51:22.153208 kubelet[2363]: I1212 22:51:22.153144 2363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 22:51:22.158852 kubelet[2363]: I1212 22:51:22.158826 2363 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 22:51:22.161774 kubelet[2363]: I1212 22:51:22.161755 2363 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 22:51:22.161999 kubelet[2363]: I1212 22:51:22.161973 2363 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 22:51:22.162167 kubelet[2363]: I1212 22:51:22.162000 2363 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 22:51:22.162264 kubelet[2363]: I1212 22:51:22.162239 2363 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 22:51:22.162264 kubelet[2363]: I1212 22:51:22.162250 2363 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 22:51:22.162454 kubelet[2363]: I1212 22:51:22.162440 2363 state_mem.go:36] "Initialized new in-memory state store" Dec 12 22:51:22.164905 kubelet[2363]: I1212 22:51:22.164869 2363 kubelet.go:446] "Attempting to sync node with API server" Dec 12 22:51:22.164905 kubelet[2363]: I1212 22:51:22.164897 2363 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 22:51:22.164998 kubelet[2363]: I1212 22:51:22.164920 2363 kubelet.go:352] "Adding apiserver pod source" Dec 12 22:51:22.164998 kubelet[2363]: I1212 22:51:22.164935 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 22:51:22.167236 kubelet[2363]: I1212 22:51:22.167213 2363 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 22:51:22.167303 kubelet[2363]: W1212 22:51:22.167220 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 12 22:51:22.167303 kubelet[2363]: E1212 22:51:22.167275 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Dec 12 22:51:22.168144 kubelet[2363]: W1212 22:51:22.168107 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 12 22:51:22.168205 kubelet[2363]: E1212 22:51:22.168159 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Dec 12 22:51:22.168430 kubelet[2363]: I1212 22:51:22.168410 2363 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 22:51:22.168563 kubelet[2363]: W1212 22:51:22.168552 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 22:51:22.169477 kubelet[2363]: I1212 22:51:22.169442 2363 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 22:51:22.169477 kubelet[2363]: I1212 22:51:22.169477 2363 server.go:1287] "Started kubelet" Dec 12 22:51:22.169773 kubelet[2363]: I1212 22:51:22.169726 2363 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 22:51:22.174679 kubelet[2363]: I1212 22:51:22.174103 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 22:51:22.174679 kubelet[2363]: I1212 22:51:22.174520 2363 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 22:51:22.175365 kubelet[2363]: I1212 22:51:22.175340 2363 server.go:479] "Adding debug handlers to kubelet server" Dec 12 22:51:22.175512 kubelet[2363]: E1212 22:51:22.175244 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880998c63495f9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 22:51:22.169458588 +0000 UTC m=+0.541622610,LastTimestamp:2025-12-12 22:51:22.169458588 +0000 UTC m=+0.541622610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 22:51:22.176348 kubelet[2363]: I1212 22:51:22.176317 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 22:51:22.177381 kubelet[2363]: I1212 22:51:22.177298 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 22:51:22.178390 kubelet[2363]: E1212 22:51:22.178367 2363 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 22:51:22.178498 kubelet[2363]: E1212 22:51:22.178414 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 22:51:22.178582 kubelet[2363]: I1212 22:51:22.178438 2363 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 22:51:22.178769 kubelet[2363]: I1212 22:51:22.178449 2363 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 22:51:22.178880 kubelet[2363]: I1212 22:51:22.178868 2363 reconciler.go:26] "Reconciler: start to sync state" Dec 12 22:51:22.179045 kubelet[2363]: W1212 22:51:22.178986 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 12 22:51:22.179045 kubelet[2363]: E1212 22:51:22.179036 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Dec 12 22:51:22.180019 kubelet[2363]: E1212 22:51:22.179989 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Dec 12 22:51:22.180110 kubelet[2363]: I1212 22:51:22.180003 2363 factory.go:221] Registration of the systemd container factory successfully Dec 12 22:51:22.180648 kubelet[2363]: I1212 22:51:22.180503 2363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 22:51:22.179000 audit[2376]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.179000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc9cb0cd0 a2=0 a3=0 items=0 ppid=2363 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 22:51:22.181720 kubelet[2363]: I1212 22:51:22.181645 2363 factory.go:221] Registration of the containerd container factory successfully Dec 12 22:51:22.183000 audit[2377]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.183000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeff21280 a2=0 a3=0 items=0 ppid=2363 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 22:51:22.184000 
audit[2379]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.184000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff6a39d50 a2=0 a3=0 items=0 ppid=2363 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 22:51:22.186000 audit[2381]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.186000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff3d9abe0 a2=0 a3=0 items=0 ppid=2363 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 22:51:22.194058 kubelet[2363]: I1212 22:51:22.194038 2363 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 22:51:22.194058 kubelet[2363]: I1212 22:51:22.194052 2363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 22:51:22.194169 kubelet[2363]: I1212 22:51:22.194070 2363 state_mem.go:36] "Initialized new in-memory state store" Dec 12 22:51:22.193000 audit[2387]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.193000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdf540660 a2=0 a3=0 items=0 ppid=2363 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 22:51:22.194000 audit[2388]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:22.194000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd38e15d0 a2=0 a3=0 items=0 ppid=2363 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 22:51:22.196000 audit[2389]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.196000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd26d6cb0 a2=0 a3=0 items=0 ppid=2363 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 22:51:22.196000 audit[2391]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:22.196000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff20ae3d0 a2=0 a3=0 items=0 ppid=2363 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 22:51:22.197000 audit[2395]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.197000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7cacfb0 a2=0 a3=0 items=0 ppid=2363 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 22:51:22.198000 audit[2396]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:22.198000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe409150 a2=0 a3=0 items=0 ppid=2363 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 22:51:22.199000 audit[2397]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:22.199000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb114360 a2=0 a3=0 items=0 ppid=2363 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 22:51:22.200000 audit[2398]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:22.200000 audit[2398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8119b00 a2=0 a3=0 items=0 ppid=2363 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.200000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.195627 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.196665 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.196683 2363 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.196701 2363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.196708 2363 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 22:51:22.277521 kubelet[2363]: E1212 22:51:22.196744 2363 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.276770 2363 policy_none.go:49] "None policy: Start" Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.276796 2363 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 22:51:22.277521 kubelet[2363]: I1212 22:51:22.276808 2363 state_mem.go:35] "Initializing new in-memory state store" Dec 12 22:51:22.277521 kubelet[2363]: W1212 22:51:22.277050 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 12 22:51:22.277521 kubelet[2363]: E1212 22:51:22.277106 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Dec 12 22:51:22.278608 kubelet[2363]: E1212 22:51:22.278587 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 22:51:22.282607 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 22:51:22.295578 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 22:51:22.297630 kubelet[2363]: E1212 22:51:22.297604 2363 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 22:51:22.312408 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 12 22:51:22.313632 kubelet[2363]: I1212 22:51:22.313607 2363 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 22:51:22.313827 kubelet[2363]: I1212 22:51:22.313810 2363 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 22:51:22.313862 kubelet[2363]: I1212 22:51:22.313833 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 22:51:22.314185 kubelet[2363]: I1212 22:51:22.314163 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 22:51:22.315312 kubelet[2363]: E1212 22:51:22.315291 2363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 22:51:22.315363 kubelet[2363]: E1212 22:51:22.315330 2363 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 22:51:22.381734 kubelet[2363]: E1212 22:51:22.381614 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Dec 12 22:51:22.415687 kubelet[2363]: I1212 22:51:22.415644 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 22:51:22.416057 kubelet[2363]: E1212 22:51:22.416035 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 12 22:51:22.508900 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 12 22:51:22.541268 kubelet[2363]: E1212 22:51:22.541236 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:22.544287 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 12 22:51:22.566757 kubelet[2363]: E1212 22:51:22.566731 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:22.569913 systemd[1]: Created slice kubepods-burstable-pod7f539930a6dfcc0903d352861378db14.slice - libcontainer container kubepods-burstable-pod7f539930a6dfcc0903d352861378db14.slice. 
Dec 12 22:51:22.572529 kubelet[2363]: E1212 22:51:22.572493 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:22.580946 kubelet[2363]: I1212 22:51:22.580918 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f539930a6dfcc0903d352861378db14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f539930a6dfcc0903d352861378db14\") " pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:22.581013 kubelet[2363]: I1212 22:51:22.580959 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:22.581013 kubelet[2363]: I1212 22:51:22.580980 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:22.581013 kubelet[2363]: I1212 22:51:22.580997 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f539930a6dfcc0903d352861378db14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f539930a6dfcc0903d352861378db14\") " pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:22.581088 kubelet[2363]: I1212 22:51:22.581016 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f539930a6dfcc0903d352861378db14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f539930a6dfcc0903d352861378db14\") " pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:22.581088 kubelet[2363]: I1212 22:51:22.581031 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:22.581088 kubelet[2363]: I1212 22:51:22.581056 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:22.581088 kubelet[2363]: I1212 22:51:22.581073 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:22.581157 kubelet[2363]: I1212 22:51:22.581090 2363 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:22.618410 kubelet[2363]: I1212 22:51:22.618268 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 22:51:22.618744 kubelet[2363]: E1212 22:51:22.618721 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 12 22:51:22.782754 kubelet[2363]: E1212 22:51:22.782649 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Dec 12 22:51:22.842001 kubelet[2363]: E1212 22:51:22.841971 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:22.842672 containerd[1577]: time="2025-12-12T22:51:22.842603897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 12 22:51:22.868199 containerd[1577]: time="2025-12-12T22:51:22.867081238Z" level=info msg="connecting to shim 3fcebe02525c99d6cd8df78f90842ed0d51a4b6cb94cf8186a2e9b2fa2ea9fac" address="unix:///run/containerd/s/1bf903937fc8771bfb380931cf1f243b97d5cb46c977382f8d9cb703b0b0839a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:22.868312 kubelet[2363]: E1212 22:51:22.867820 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:22.868674 containerd[1577]: time="2025-12-12T22:51:22.868617194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 12 22:51:22.873498 kubelet[2363]: E1212 22:51:22.873472 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:22.874271 containerd[1577]: time="2025-12-12T22:51:22.874215366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f539930a6dfcc0903d352861378db14,Namespace:kube-system,Attempt:0,}" Dec 12 22:51:22.895485 systemd[1]: Started cri-containerd-3fcebe02525c99d6cd8df78f90842ed0d51a4b6cb94cf8186a2e9b2fa2ea9fac.scope - libcontainer container 3fcebe02525c99d6cd8df78f90842ed0d51a4b6cb94cf8186a2e9b2fa2ea9fac. 
Dec 12 22:51:22.913350 containerd[1577]: time="2025-12-12T22:51:22.913179584Z" level=info msg="connecting to shim fd30f09ab8f2114e3fac43e673ed4b6eb8e37ed31ec2162cf35293863a3b875f" address="unix:///run/containerd/s/6c1f50d2bbf70afb89bf8c7dab35fba459a8244a9b7813c4e4efc1d3e27f5e69" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:22.914196 containerd[1577]: time="2025-12-12T22:51:22.914173135Z" level=info msg="connecting to shim 5fa0a7cd2a0435a4185d873a02c6cc397b357eb6b3a7d6fa6f9ee696e5d1fcdc" address="unix:///run/containerd/s/3c2ac8cefd82abd33b714dcb7e6c9431ffd7f9f2dc80af466182774bb42a88b4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:22.918000 audit: BPF prog-id=87 op=LOAD Dec 12 22:51:22.920000 audit: BPF prog-id=88 op=LOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.920000 audit: BPF prog-id=88 op=UNLOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.920000 audit: BPF prog-id=89 op=LOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.920000 audit: BPF prog-id=90 op=LOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.920000 audit: BPF prog-id=90 op=UNLOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.920000 audit: BPF prog-id=89 op=UNLOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.920000 audit: BPF prog-id=91 op=LOAD Dec 12 22:51:22.920000 audit[2419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2408 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636562653032353235633939643663643864663738663930383432 Dec 12 22:51:22.935734 systemd[1]: Started cri-containerd-5fa0a7cd2a0435a4185d873a02c6cc397b357eb6b3a7d6fa6f9ee696e5d1fcdc.scope - libcontainer container 5fa0a7cd2a0435a4185d873a02c6cc397b357eb6b3a7d6fa6f9ee696e5d1fcdc. Dec 12 22:51:22.939088 systemd[1]: Started cri-containerd-fd30f09ab8f2114e3fac43e673ed4b6eb8e37ed31ec2162cf35293863a3b875f.scope - libcontainer container fd30f09ab8f2114e3fac43e673ed4b6eb8e37ed31ec2162cf35293863a3b875f. 
Dec 12 22:51:22.951344 containerd[1577]: time="2025-12-12T22:51:22.951293885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fcebe02525c99d6cd8df78f90842ed0d51a4b6cb94cf8186a2e9b2fa2ea9fac\"" Dec 12 22:51:22.955000 audit: BPF prog-id=92 op=LOAD Dec 12 22:51:22.956447 kubelet[2363]: E1212 22:51:22.956425 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:22.956000 audit: BPF prog-id=93 op=LOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.956000 audit: BPF prog-id=93 op=UNLOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.956000 audit: BPF prog-id=94 op=LOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.956000 audit: BPF prog-id=95 op=LOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.956000 audit: BPF prog-id=95 op=UNLOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.956000 audit: BPF prog-id=94 op=UNLOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.956000 audit: BPF prog-id=96 op=LOAD Dec 12 22:51:22.956000 audit: BPF prog-id=97 op=LOAD Dec 12 22:51:22.956000 audit[2477]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2448 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333066303961623866323131346533666163343365363733656434 Dec 12 22:51:22.957000 audit: BPF prog-id=98 op=LOAD Dec 12 22:51:22.957000 audit[2478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.957000 audit: BPF prog-id=98 op=UNLOAD Dec 12 22:51:22.957000 audit[2478]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.957000 audit: BPF prog-id=99 op=LOAD Dec 12 22:51:22.957000 audit[2478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.959331 containerd[1577]: time="2025-12-12T22:51:22.959295202Z" level=info msg="CreateContainer within sandbox \"3fcebe02525c99d6cd8df78f90842ed0d51a4b6cb94cf8186a2e9b2fa2ea9fac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 22:51:22.958000 audit: BPF prog-id=100 op=LOAD Dec 12 22:51:22.958000 audit[2478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.959000 audit: BPF prog-id=100 op=UNLOAD Dec 12 22:51:22.959000 audit[2478]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.959000 audit: BPF prog-id=99 op=UNLOAD Dec 12 22:51:22.959000 audit[2478]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.959000 audit: BPF prog-id=101 op=LOAD Dec 12 22:51:22.959000 audit[2478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2456 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:22.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566613061376364326130343335613431383564383733613032633663 Dec 12 22:51:22.973653 containerd[1577]: time="2025-12-12T22:51:22.973620368Z" level=info msg="Container 7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00: CDI devices from CRI Config.CDIDevices: []" Dec 12 
22:51:22.981584 containerd[1577]: time="2025-12-12T22:51:22.981540448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd30f09ab8f2114e3fac43e673ed4b6eb8e37ed31ec2162cf35293863a3b875f\"" Dec 12 22:51:22.984042 kubelet[2363]: E1212 22:51:22.984019 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:22.984681 containerd[1577]: time="2025-12-12T22:51:22.984605346Z" level=info msg="CreateContainer within sandbox \"3fcebe02525c99d6cd8df78f90842ed0d51a4b6cb94cf8186a2e9b2fa2ea9fac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00\"" Dec 12 22:51:22.985228 kubelet[2363]: W1212 22:51:22.985090 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 12 22:51:22.985285 kubelet[2363]: E1212 22:51:22.985245 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Dec 12 22:51:22.985708 containerd[1577]: time="2025-12-12T22:51:22.985656729Z" level=info msg="StartContainer for \"7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00\"" Dec 12 22:51:22.986451 containerd[1577]: time="2025-12-12T22:51:22.986427492Z" level=info msg="CreateContainer within sandbox \"fd30f09ab8f2114e3fac43e673ed4b6eb8e37ed31ec2162cf35293863a3b875f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 22:51:22.987045 containerd[1577]: time="2025-12-12T22:51:22.987016946Z" level=info msg="connecting to shim 7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00" address="unix:///run/containerd/s/1bf903937fc8771bfb380931cf1f243b97d5cb46c977382f8d9cb703b0b0839a" protocol=ttrpc version=3 Dec 12 22:51:22.987968 containerd[1577]: time="2025-12-12T22:51:22.987876600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f539930a6dfcc0903d352861378db14,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fa0a7cd2a0435a4185d873a02c6cc397b357eb6b3a7d6fa6f9ee696e5d1fcdc\"" Dec 12 22:51:22.988825 kubelet[2363]: E1212 22:51:22.988769 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:22.992574 containerd[1577]: time="2025-12-12T22:51:22.990515318Z" level=info msg="CreateContainer within sandbox \"5fa0a7cd2a0435a4185d873a02c6cc397b357eb6b3a7d6fa6f9ee696e5d1fcdc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 22:51:22.995546 containerd[1577]: time="2025-12-12T22:51:22.995242935Z" level=info msg="Container a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:23.003279 containerd[1577]: time="2025-12-12T22:51:23.003219279Z" level=info msg="CreateContainer within sandbox 
\"fd30f09ab8f2114e3fac43e673ed4b6eb8e37ed31ec2162cf35293863a3b875f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041\"" Dec 12 22:51:23.003652 containerd[1577]: time="2025-12-12T22:51:23.003630051Z" level=info msg="StartContainer for \"a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041\"" Dec 12 22:51:23.005729 containerd[1577]: time="2025-12-12T22:51:23.005703383Z" level=info msg="connecting to shim a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041" address="unix:///run/containerd/s/6c1f50d2bbf70afb89bf8c7dab35fba459a8244a9b7813c4e4efc1d3e27f5e69" protocol=ttrpc version=3 Dec 12 22:51:23.007374 containerd[1577]: time="2025-12-12T22:51:23.006842742Z" level=info msg="Container 420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:23.007695 systemd[1]: Started cri-containerd-7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00.scope - libcontainer container 7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00. Dec 12 22:51:23.015099 containerd[1577]: time="2025-12-12T22:51:23.015060582Z" level=info msg="CreateContainer within sandbox \"5fa0a7cd2a0435a4185d873a02c6cc397b357eb6b3a7d6fa6f9ee696e5d1fcdc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f\"" Dec 12 22:51:23.015682 containerd[1577]: time="2025-12-12T22:51:23.015662315Z" level=info msg="StartContainer for \"420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f\"" Dec 12 22:51:23.016759 containerd[1577]: time="2025-12-12T22:51:23.016732918Z" level=info msg="connecting to shim 420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f" address="unix:///run/containerd/s/3c2ac8cefd82abd33b714dcb7e6c9431ffd7f9f2dc80af466182774bb42a88b4" protocol=ttrpc version=3 Dec 12 22:51:23.018000 audit: BPF prog-id=102 op=LOAD Dec 12 22:51:23.020000 audit: BPF prog-id=103 op=LOAD Dec 12 22:51:23.020000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.020000 audit: BPF prog-id=103 op=UNLOAD Dec 12 22:51:23.020000 audit[2536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.020000 audit: BPF prog-id=104 op=LOAD Dec 12 22:51:23.020000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 
a2=98 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.021000 audit: BPF prog-id=105 op=LOAD Dec 12 22:51:23.021000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.021000 audit: BPF prog-id=105 op=UNLOAD Dec 12 22:51:23.021000 audit[2536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.021000 audit: BPF prog-id=104 op=UNLOAD Dec 12 22:51:23.021000 audit[2536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.021000 audit: BPF prog-id=106 op=LOAD Dec 12 22:51:23.021000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2408 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762636665646134363838626165303531393638373264323661666430 Dec 12 22:51:23.022695 kubelet[2363]: I1212 22:51:23.022160 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 22:51:23.022695 kubelet[2363]: E1212 22:51:23.022648 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 12 22:51:23.026756 systemd[1]: Started cri-containerd-a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041.scope - libcontainer container a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041. Dec 12 22:51:23.042723 systemd[1]: Started cri-containerd-420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f.scope - libcontainer container 420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f. Dec 12 22:51:23.044000 audit: BPF prog-id=107 op=LOAD Dec 12 22:51:23.045000 audit: BPF prog-id=108 op=LOAD Dec 12 22:51:23.045000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.045000 audit: BPF prog-id=108 op=UNLOAD Dec 12 22:51:23.045000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.046000 audit: BPF prog-id=109 op=LOAD Dec 12 22:51:23.046000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.046000 audit: BPF prog-id=110 op=LOAD Dec 12 22:51:23.046000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.046000 audit: BPF prog-id=110 op=UNLOAD Dec 12 22:51:23.046000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.046000 audit: BPF prog-id=109 op=UNLOAD Dec 12 22:51:23.046000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.046000 audit: BPF prog-id=111 op=LOAD Dec 12 22:51:23.046000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2448 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134653562306261396263393037356230393064383864356139386332 Dec 12 22:51:23.058000 audit: BPF prog-id=112 op=LOAD Dec 12 22:51:23.059000 audit: BPF prog-id=113 op=LOAD Dec 12 22:51:23.059000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.059000 audit: BPF prog-id=113 op=UNLOAD Dec 12 22:51:23.059000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.060000 audit: BPF prog-id=114 op=LOAD Dec 12 22:51:23.060000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.060000 audit: BPF prog-id=115 op=LOAD Dec 12 22:51:23.060000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.060000 audit: BPF prog-id=115 op=UNLOAD Dec 12 22:51:23.060000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.060000 audit: BPF prog-id=114 op=UNLOAD Dec 12 22:51:23.060000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.060000 audit: BPF prog-id=116 op=LOAD Dec 12 22:51:23.060000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2456 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:23.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432306132643731616336623538373939636332626237343133356461 Dec 12 22:51:23.061446 containerd[1577]: time="2025-12-12T22:51:23.061318687Z" level=info msg="StartContainer for \"7bcfeda4688bae05196872d26afd01257f3b6e3cc50dc21528e5bd254cebbd00\" returns successfully" Dec 12 22:51:23.076316 containerd[1577]: time="2025-12-12T22:51:23.076277519Z" level=info msg="StartContainer for \"a4e5b0ba9bc9075b090d88d5a98c203f6a8f6c07fcb731badf26133d03a2d041\" returns successfully" Dec 12 22:51:23.098821 containerd[1577]: 
time="2025-12-12T22:51:23.098784184Z" level=info msg="StartContainer for \"420a2d71ac6b58799cc2bb74135da58025690b71253231b8c9b66f0fd49f261f\" returns successfully" Dec 12 22:51:23.203559 kubelet[2363]: E1212 22:51:23.203510 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:23.203685 kubelet[2363]: E1212 22:51:23.203667 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:23.209031 kubelet[2363]: E1212 22:51:23.209004 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:23.209160 kubelet[2363]: E1212 22:51:23.209141 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:23.209338 kubelet[2363]: E1212 22:51:23.209318 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:23.209441 kubelet[2363]: E1212 22:51:23.209426 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:23.825055 kubelet[2363]: I1212 22:51:23.825017 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 22:51:24.211363 kubelet[2363]: E1212 22:51:24.211265 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:24.211452 kubelet[2363]: E1212 22:51:24.211405 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:24.212076 kubelet[2363]: E1212 22:51:24.212046 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:24.212162 kubelet[2363]: E1212 22:51:24.212147 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:24.822300 kubelet[2363]: E1212 22:51:24.822252 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 22:51:24.870653 kubelet[2363]: E1212 22:51:24.870621 2363 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 22:51:24.870953 kubelet[2363]: E1212 22:51:24.870766 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:24.910025 kubelet[2363]: I1212 22:51:24.909986 2363 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 22:51:24.979777 kubelet[2363]: I1212 22:51:24.979722 2363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:24.990167 
kubelet[2363]: E1212 22:51:24.989197 2363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:24.990167 kubelet[2363]: I1212 22:51:24.989230 2363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:24.990780 kubelet[2363]: E1212 22:51:24.990753 2363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:24.990780 kubelet[2363]: I1212 22:51:24.990779 2363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:24.992362 kubelet[2363]: E1212 22:51:24.992331 2363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:25.169308 kubelet[2363]: I1212 22:51:25.169179 2363 apiserver.go:52] "Watching apiserver" Dec 12 22:51:25.179011 kubelet[2363]: I1212 22:51:25.178976 2363 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 22:51:25.210689 kubelet[2363]: I1212 22:51:25.210661 2363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:25.212753 kubelet[2363]: E1212 22:51:25.212685 2363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:25.212889 kubelet[2363]: E1212 22:51:25.212845 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:26.656328 kubelet[2363]: I1212 22:51:26.656291 2363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:26.664496 kubelet[2363]: E1212 22:51:26.664392 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:27.056643 systemd[1]: Reload requested from client PID 2641 ('systemctl') (unit session-8.scope)... Dec 12 22:51:27.056661 systemd[1]: Reloading... Dec 12 22:51:27.119555 zram_generator::config[2685]: No configuration found. Dec 12 22:51:27.213806 kubelet[2363]: E1212 22:51:27.213761 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:27.406137 systemd[1]: Reloading finished in 349 ms. Dec 12 22:51:27.435234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 22:51:27.442953 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 22:51:27.443262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:27.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:51:27.443336 systemd[1]: kubelet.service: Consumed 933ms CPU time, 127.8M memory peak. Dec 12 22:51:27.443965 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 22:51:27.444011 kernel: audit: type=1131 audit(1765579887.442:405): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:27.446795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 22:51:27.446000 audit: BPF prog-id=117 op=LOAD Dec 12 22:51:27.446000 audit: BPF prog-id=84 op=UNLOAD Dec 12 22:51:27.448957 kernel: audit: type=1334 audit(1765579887.446:406): prog-id=117 op=LOAD Dec 12 22:51:27.448995 kernel: audit: type=1334 audit(1765579887.446:407): prog-id=84 op=UNLOAD Dec 12 22:51:27.449016 kernel: audit: type=1334 audit(1765579887.447:408): prog-id=118 op=LOAD Dec 12 22:51:27.447000 audit: BPF prog-id=118 op=LOAD Dec 12 22:51:27.451307 kernel: audit: type=1334 audit(1765579887.449:409): prog-id=119 op=LOAD Dec 12 22:51:27.451378 kernel: audit: type=1334 audit(1765579887.449:410): prog-id=85 op=UNLOAD Dec 12 22:51:27.451397 kernel: audit: type=1334 audit(1765579887.449:411): prog-id=86 op=UNLOAD Dec 12 22:51:27.451413 kernel: audit: type=1334 audit(1765579887.449:412): prog-id=120 op=LOAD Dec 12 22:51:27.451427 kernel: audit: type=1334 audit(1765579887.449:413): prog-id=77 op=UNLOAD Dec 12 22:51:27.449000 audit: BPF prog-id=119 op=LOAD Dec 12 22:51:27.449000 audit: BPF prog-id=85 op=UNLOAD Dec 12 22:51:27.449000 audit: BPF prog-id=86 op=UNLOAD Dec 12 22:51:27.449000 audit: BPF prog-id=120 op=LOAD Dec 12 22:51:27.449000 audit: BPF prog-id=77 op=UNLOAD Dec 12 22:51:27.452000 audit: BPF prog-id=121 op=LOAD Dec 12 22:51:27.452000 audit: BPF prog-id=74 op=UNLOAD Dec 12 22:51:27.452000 audit: BPF prog-id=122 op=LOAD Dec 12 22:51:27.452000 audit: BPF prog-id=123 op=LOAD Dec 12 22:51:27.452000 audit: BPF prog-id=75 op=UNLOAD Dec 12 22:51:27.452000 audit: BPF prog-id=76 op=UNLOAD Dec 12 22:51:27.453532 kernel: audit: type=1334 audit(1765579887.452:414): prog-id=121 op=LOAD Dec 12 22:51:27.453000 audit: BPF prog-id=124 op=LOAD Dec 12 22:51:27.453000 audit: BPF prog-id=81 op=UNLOAD Dec 12 22:51:27.454000 audit: BPF prog-id=125 op=LOAD Dec 12 22:51:27.454000 audit: BPF prog-id=78 op=UNLOAD Dec 12 22:51:27.454000 audit: BPF prog-id=126 op=LOAD Dec 12 22:51:27.454000 audit: BPF prog-id=127 op=LOAD Dec 12 22:51:27.454000 audit: BPF prog-id=79 op=UNLOAD Dec 12 22:51:27.454000 audit: BPF prog-id=80 op=UNLOAD Dec 12 22:51:27.455000 audit: BPF prog-id=128 op=LOAD Dec 12 22:51:27.455000 audit: BPF prog-id=71 op=UNLOAD Dec 12 22:51:27.455000 audit: BPF prog-id=129 op=LOAD Dec 12 22:51:27.455000 audit: BPF prog-id=130 op=LOAD Dec 12 22:51:27.455000 audit: BPF prog-id=72 op=UNLOAD Dec 12 22:51:27.455000 audit: BPF prog-id=73 op=UNLOAD Dec 12 22:51:27.477000 audit: BPF prog-id=131 op=LOAD Dec 12 22:51:27.477000 audit: BPF prog-id=70 op=UNLOAD Dec 12 22:51:27.478000 audit: BPF prog-id=132 op=LOAD Dec 12 22:51:27.478000 audit: BPF prog-id=133 op=LOAD Dec 12 22:51:27.478000 audit: BPF prog-id=82 op=UNLOAD Dec 12 22:51:27.478000 audit: BPF prog-id=83 op=UNLOAD Dec 12 22:51:27.479000 audit: BPF prog-id=134 op=LOAD Dec 12 22:51:27.479000 audit: BPF prog-id=67 op=UNLOAD Dec 12 22:51:27.479000 audit: BPF prog-id=135 op=LOAD Dec 12 22:51:27.479000 audit: BPF prog-id=136 op=LOAD Dec 12 22:51:27.479000 audit: BPF prog-id=68 op=UNLOAD Dec 12 22:51:27.479000 
audit: BPF prog-id=69 op=UNLOAD Dec 12 22:51:27.647308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 22:51:27.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:27.651640 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 22:51:27.693661 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 22:51:27.693661 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 22:51:27.693661 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 22:51:27.693661 kubelet[2729]: I1212 22:51:27.693626 2729 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 22:51:27.700285 kubelet[2729]: I1212 22:51:27.700221 2729 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 22:51:27.700285 kubelet[2729]: I1212 22:51:27.700248 2729 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 22:51:27.700753 kubelet[2729]: I1212 22:51:27.700729 2729 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 22:51:27.703546 kubelet[2729]: I1212 22:51:27.703498 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 22:51:27.705942 kubelet[2729]: I1212 22:51:27.705913 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 22:51:27.709506 kubelet[2729]: I1212 22:51:27.709484 2729 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 22:51:27.712281 kubelet[2729]: I1212 22:51:27.712250 2729 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 22:51:27.712502 kubelet[2729]: I1212 22:51:27.712474 2729 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 22:51:27.712677 kubelet[2729]: I1212 22:51:27.712502 2729 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 22:51:27.712753 kubelet[2729]: I1212 22:51:27.712688 2729 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 22:51:27.712753 kubelet[2729]: I1212 22:51:27.712698 2729 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 22:51:27.712753 kubelet[2729]: I1212 22:51:27.712738 2729 state_mem.go:36] "Initialized new in-memory state store" Dec 12 22:51:27.712875 kubelet[2729]: I1212 22:51:27.712863 2729 kubelet.go:446] "Attempting to sync node with API server" Dec 12 22:51:27.712902 kubelet[2729]: I1212 22:51:27.712881 2729 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 22:51:27.712902 kubelet[2729]: I1212 22:51:27.712901 2729 kubelet.go:352] "Adding apiserver pod source" Dec 12 22:51:27.712937 kubelet[2729]: I1212 22:51:27.712910 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 22:51:27.714419 kubelet[2729]: I1212 22:51:27.714397 2729 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 22:51:27.717328 kubelet[2729]: I1212 22:51:27.717302 2729 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 22:51:27.718167 kubelet[2729]: I1212 22:51:27.718148 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 22:51:27.718214 kubelet[2729]: I1212 22:51:27.718192 2729 server.go:1287] "Started kubelet" Dec 12 22:51:27.719992 kubelet[2729]: I1212 22:51:27.719959 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 22:51:27.720879 kubelet[2729]: I1212 22:51:27.720586 2729 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Dec 12 22:51:27.720879 kubelet[2729]: I1212 22:51:27.720741 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 22:51:27.721001 kubelet[2729]: I1212 22:51:27.720976 2729 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 22:51:27.721281 kubelet[2729]: I1212 22:51:27.721256 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 22:51:27.722707 kubelet[2729]: I1212 22:51:27.722679 2729 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 22:51:27.722781 kubelet[2729]: E1212 22:51:27.722765 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 22:51:27.722954 kubelet[2729]: I1212 22:51:27.722941 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 22:51:27.723631 kubelet[2729]: I1212 22:51:27.723606 2729 reconciler.go:26] "Reconciler: start to sync state" Dec 12 22:51:27.727073 kubelet[2729]: I1212 22:51:27.726978 2729 server.go:479] "Adding debug handlers to kubelet server" Dec 12 22:51:27.728364 kubelet[2729]: I1212 22:51:27.728332 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 22:51:27.729322 kubelet[2729]: I1212 22:51:27.729207 2729 factory.go:221] Registration of the containerd container factory successfully Dec 12 22:51:27.729322 kubelet[2729]: I1212 22:51:27.729228 2729 factory.go:221] Registration of the systemd container factory successfully Dec 12 22:51:27.739637 kubelet[2729]: I1212 22:51:27.739593 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 22:51:27.740966 kubelet[2729]: I1212 22:51:27.740926 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 22:51:27.740966 kubelet[2729]: I1212 22:51:27.740954 2729 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 22:51:27.741063 kubelet[2729]: I1212 22:51:27.740974 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 22:51:27.741063 kubelet[2729]: I1212 22:51:27.740985 2729 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 22:51:27.741176 kubelet[2729]: E1212 22:51:27.741036 2729 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 22:51:27.749687 kubelet[2729]: E1212 22:51:27.749493 2729 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 22:51:27.781759 kubelet[2729]: I1212 22:51:27.781732 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 22:51:27.781938 kubelet[2729]: I1212 22:51:27.781923 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 22:51:27.782006 kubelet[2729]: I1212 22:51:27.781997 2729 state_mem.go:36] "Initialized new in-memory state store" Dec 12 22:51:27.782237 kubelet[2729]: I1212 22:51:27.782218 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 22:51:27.782341 kubelet[2729]: I1212 22:51:27.782312 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 22:51:27.782399 kubelet[2729]: I1212 22:51:27.782391 2729 policy_none.go:49] "None policy: Start" Dec 12 22:51:27.782449 kubelet[2729]: I1212 22:51:27.782442 2729 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 22:51:27.782498 kubelet[2729]: I1212 22:51:27.782491 2729 state_mem.go:35] "Initializing new in-memory state store" Dec 12 22:51:27.782690 kubelet[2729]: I1212 22:51:27.782675 2729 state_mem.go:75] "Updated machine memory state" Dec 12 22:51:27.786096 kubelet[2729]: I1212 22:51:27.786062 2729 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 22:51:27.786379 kubelet[2729]: I1212 22:51:27.786364 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 22:51:27.786504 kubelet[2729]: I1212 22:51:27.786465 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 22:51:27.786941 kubelet[2729]: I1212 22:51:27.786923 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 22:51:27.789733 kubelet[2729]: E1212 22:51:27.788560 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 22:51:27.842337 kubelet[2729]: I1212 22:51:27.842290 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:27.842481 kubelet[2729]: I1212 22:51:27.842456 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:27.842573 kubelet[2729]: I1212 22:51:27.842558 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:27.850837 kubelet[2729]: E1212 22:51:27.850805 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:27.892346 kubelet[2729]: I1212 22:51:27.892320 2729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 22:51:27.899346 kubelet[2729]: I1212 22:51:27.899299 2729 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 22:51:27.899458 kubelet[2729]: I1212 22:51:27.899416 2729 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 22:51:27.925468 kubelet[2729]: I1212 22:51:27.925418 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:27.925468 kubelet[2729]: I1212 22:51:27.925460 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f539930a6dfcc0903d352861378db14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f539930a6dfcc0903d352861378db14\") " pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:27.925653 kubelet[2729]: I1212 22:51:27.925481 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:27.925653 kubelet[2729]: I1212 22:51:27.925502 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:27.925653 kubelet[2729]: I1212 22:51:27.925546 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:27.925653 kubelet[2729]: I1212 22:51:27.925563 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f539930a6dfcc0903d352861378db14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f539930a6dfcc0903d352861378db14\") " 
pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:27.925653 kubelet[2729]: I1212 22:51:27.925577 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f539930a6dfcc0903d352861378db14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f539930a6dfcc0903d352861378db14\") " pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:27.925769 kubelet[2729]: I1212 22:51:27.925593 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:27.925769 kubelet[2729]: I1212 22:51:27.925607 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 22:51:28.147646 kubelet[2729]: E1212 22:51:28.147535 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:28.149646 kubelet[2729]: E1212 22:51:28.149612 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:28.151334 kubelet[2729]: E1212 22:51:28.151292 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:28.713615 kubelet[2729]: I1212 22:51:28.713560 2729 apiserver.go:52] "Watching apiserver" Dec 12 22:51:28.723267 kubelet[2729]: I1212 22:51:28.723218 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 22:51:28.766134 kubelet[2729]: I1212 22:51:28.765989 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:28.766134 kubelet[2729]: I1212 22:51:28.766069 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:28.766389 kubelet[2729]: E1212 22:51:28.766359 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:28.773717 kubelet[2729]: E1212 22:51:28.772887 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 22:51:28.773717 kubelet[2729]: E1212 22:51:28.773056 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:28.773917 kubelet[2729]: E1212 22:51:28.773704 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 22:51:28.773917 kubelet[2729]: E1212 22:51:28.773906 2729 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:28.784912 kubelet[2729]: I1212 22:51:28.784733 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.784702481 podStartE2EDuration="2.784702481s" podCreationTimestamp="2025-12-12 22:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 22:51:28.784346894 +0000 UTC m=+1.128936423" watchObservedRunningTime="2025-12-12 22:51:28.784702481 +0000 UTC m=+1.129291970" Dec 12 22:51:28.792014 kubelet[2729]: I1212 22:51:28.791918 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.791903427 podStartE2EDuration="1.791903427s" podCreationTimestamp="2025-12-12 22:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 22:51:28.791640159 +0000 UTC m=+1.136229688" watchObservedRunningTime="2025-12-12 22:51:28.791903427 +0000 UTC m=+1.136492956" Dec 12 22:51:28.801228 kubelet[2729]: I1212 22:51:28.801170 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8011303239999998 podStartE2EDuration="1.801130324s" podCreationTimestamp="2025-12-12 22:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 22:51:28.801024832 +0000 UTC m=+1.145614361" watchObservedRunningTime="2025-12-12 22:51:28.801130324 +0000 UTC m=+1.145719893" Dec 12 22:51:29.767608 kubelet[2729]: E1212 22:51:29.767576 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:29.767980 kubelet[2729]: E1212 22:51:29.767581 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:31.171880 kubelet[2729]: E1212 22:51:31.171784 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:32.932653 kubelet[2729]: I1212 22:51:32.932609 2729 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 22:51:32.933563 containerd[1577]: time="2025-12-12T22:51:32.933014178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 22:51:32.933861 kubelet[2729]: I1212 22:51:32.933178 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 22:51:33.648474 systemd[1]: Created slice kubepods-besteffort-poda177a890_bd7a_4a23_b629_04d4c4075d37.slice - libcontainer container kubepods-besteffort-poda177a890_bd7a_4a23_b629_04d4c4075d37.slice. 
Dec 12 22:51:33.663589 kubelet[2729]: I1212 22:51:33.663553 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzmc\" (UniqueName: \"kubernetes.io/projected/a177a890-bd7a-4a23-b629-04d4c4075d37-kube-api-access-kqzmc\") pod \"kube-proxy-xwjzp\" (UID: \"a177a890-bd7a-4a23-b629-04d4c4075d37\") " pod="kube-system/kube-proxy-xwjzp" Dec 12 22:51:33.663706 kubelet[2729]: I1212 22:51:33.663595 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a177a890-bd7a-4a23-b629-04d4c4075d37-kube-proxy\") pod \"kube-proxy-xwjzp\" (UID: \"a177a890-bd7a-4a23-b629-04d4c4075d37\") " pod="kube-system/kube-proxy-xwjzp" Dec 12 22:51:33.663706 kubelet[2729]: I1212 22:51:33.663616 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a177a890-bd7a-4a23-b629-04d4c4075d37-xtables-lock\") pod \"kube-proxy-xwjzp\" (UID: \"a177a890-bd7a-4a23-b629-04d4c4075d37\") " pod="kube-system/kube-proxy-xwjzp" Dec 12 22:51:33.663706 kubelet[2729]: I1212 22:51:33.663632 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a177a890-bd7a-4a23-b629-04d4c4075d37-lib-modules\") pod \"kube-proxy-xwjzp\" (UID: \"a177a890-bd7a-4a23-b629-04d4c4075d37\") " pod="kube-system/kube-proxy-xwjzp" Dec 12 22:51:33.773497 kubelet[2729]: E1212 22:51:33.773445 2729 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 12 22:51:33.773497 kubelet[2729]: E1212 22:51:33.773492 2729 projected.go:194] Error preparing data for projected volume kube-api-access-kqzmc for pod kube-system/kube-proxy-xwjzp: configmap "kube-root-ca.crt" not found Dec 12 22:51:33.773786 kubelet[2729]: E1212 22:51:33.773565 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a177a890-bd7a-4a23-b629-04d4c4075d37-kube-api-access-kqzmc podName:a177a890-bd7a-4a23-b629-04d4c4075d37 nodeName:}" failed. No retries permitted until 2025-12-12 22:51:34.273544792 +0000 UTC m=+6.618134321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kqzmc" (UniqueName: "kubernetes.io/projected/a177a890-bd7a-4a23-b629-04d4c4075d37-kube-api-access-kqzmc") pod "kube-proxy-xwjzp" (UID: "a177a890-bd7a-4a23-b629-04d4c4075d37") : configmap "kube-root-ca.crt" not found Dec 12 22:51:34.057641 systemd[1]: Created slice kubepods-besteffort-podc5bd5784_f4fd_4e03_b21a_1226f9d8351b.slice - libcontainer container kubepods-besteffort-podc5bd5784_f4fd_4e03_b21a_1226f9d8351b.slice. 
Dec 12 22:51:34.066324 kubelet[2729]: I1212 22:51:34.066285 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c5bd5784-f4fd-4e03-b21a-1226f9d8351b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4tl6n\" (UID: \"c5bd5784-f4fd-4e03-b21a-1226f9d8351b\") " pod="tigera-operator/tigera-operator-7dcd859c48-4tl6n" Dec 12 22:51:34.066324 kubelet[2729]: I1212 22:51:34.066328 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrz6\" (UniqueName: \"kubernetes.io/projected/c5bd5784-f4fd-4e03-b21a-1226f9d8351b-kube-api-access-jnrz6\") pod \"tigera-operator-7dcd859c48-4tl6n\" (UID: \"c5bd5784-f4fd-4e03-b21a-1226f9d8351b\") " pod="tigera-operator/tigera-operator-7dcd859c48-4tl6n" Dec 12 22:51:34.361597 containerd[1577]: time="2025-12-12T22:51:34.361462883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4tl6n,Uid:c5bd5784-f4fd-4e03-b21a-1226f9d8351b,Namespace:tigera-operator,Attempt:0,}" Dec 12 22:51:34.383941 containerd[1577]: time="2025-12-12T22:51:34.383886496Z" level=info msg="connecting to shim 6e1cac764f998d7df3376eb036f343a3ee9e38f5c01b2e5c8fcf1929683807f1" address="unix:///run/containerd/s/b03a60dc4987f0676f40935107e3dee6439029ad6c1f797c0f4ff66b5a9d765c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:34.422795 systemd[1]: Started cri-containerd-6e1cac764f998d7df3376eb036f343a3ee9e38f5c01b2e5c8fcf1929683807f1.scope - libcontainer container 6e1cac764f998d7df3376eb036f343a3ee9e38f5c01b2e5c8fcf1929683807f1. Dec 12 22:51:34.432000 audit: BPF prog-id=137 op=LOAD Dec 12 22:51:34.435137 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 22:51:34.435194 kernel: audit: type=1334 audit(1765579894.432:447): prog-id=137 op=LOAD Dec 12 22:51:34.435533 kernel: audit: type=1334 audit(1765579894.433:448): prog-id=138 op=LOAD Dec 12 22:51:34.433000 audit: BPF prog-id=138 op=LOAD Dec 12 22:51:34.433000 audit[2803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.439824 kernel: audit: type=1300 audit(1765579894.433:448): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.439889 kernel: audit: type=1327 audit(1765579894.433:448): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.434000 audit: BPF prog-id=138 op=UNLOAD Dec 12 22:51:34.444279 kernel: audit: type=1334 audit(1765579894.434:449): prog-id=138 op=UNLOAD Dec 12 22:51:34.444329 kernel: audit: type=1300 
audit(1765579894.434:449): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.434000 audit[2803]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.434000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.434000 audit: BPF prog-id=139 op=LOAD Dec 12 22:51:34.451772 kernel: audit: type=1327 audit(1765579894.434:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.451895 kernel: audit: type=1334 audit(1765579894.434:450): prog-id=139 op=LOAD Dec 12 22:51:34.451922 kernel: audit: type=1300 audit(1765579894.434:450): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.434000 audit[2803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.434000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.460470 kernel: audit: type=1327 audit(1765579894.434:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.434000 audit: BPF prog-id=140 op=LOAD Dec 12 22:51:34.434000 audit[2803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.434000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.438000 audit: BPF prog-id=140 op=UNLOAD Dec 12 22:51:34.438000 audit[2803]: SYSCALL arch=c00000b7 syscall=57 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.438000 audit: BPF prog-id=139 op=UNLOAD Dec 12 22:51:34.438000 audit[2803]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.438000 audit: BPF prog-id=141 op=LOAD Dec 12 22:51:34.438000 audit[2803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2792 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316361633736346639393864376466333337366562303336663334 Dec 12 22:51:34.481471 containerd[1577]: time="2025-12-12T22:51:34.481433552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4tl6n,Uid:c5bd5784-f4fd-4e03-b21a-1226f9d8351b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e1cac764f998d7df3376eb036f343a3ee9e38f5c01b2e5c8fcf1929683807f1\"" Dec 12 22:51:34.483511 containerd[1577]: time="2025-12-12T22:51:34.483474526Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 22:51:34.563653 kubelet[2729]: E1212 22:51:34.563079 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:34.564039 containerd[1577]: time="2025-12-12T22:51:34.564002305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwjzp,Uid:a177a890-bd7a-4a23-b629-04d4c4075d37,Namespace:kube-system,Attempt:0,}" Dec 12 22:51:34.583289 containerd[1577]: time="2025-12-12T22:51:34.583248215Z" level=info msg="connecting to shim 0fa487133313fefbff4d3a89c0ec43dcb68d5a500fc0e7efda0f86035dcad582" address="unix:///run/containerd/s/0be295fef58247819efb4805df4991ebfe17da13e4ec61e399f9ca96fb529b34" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:34.612842 systemd[1]: Started cri-containerd-0fa487133313fefbff4d3a89c0ec43dcb68d5a500fc0e7efda0f86035dcad582.scope - libcontainer container 0fa487133313fefbff4d3a89c0ec43dcb68d5a500fc0e7efda0f86035dcad582. 
Dec 12 22:51:34.621000 audit: BPF prog-id=142 op=LOAD Dec 12 22:51:34.622000 audit: BPF prog-id=143 op=LOAD Dec 12 22:51:34.622000 audit[2849]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.622000 audit: BPF prog-id=143 op=UNLOAD Dec 12 22:51:34.622000 audit[2849]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.622000 audit: BPF prog-id=144 op=LOAD Dec 12 22:51:34.622000 audit[2849]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.622000 audit: BPF prog-id=145 op=LOAD Dec 12 22:51:34.622000 audit[2849]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.623000 audit: BPF prog-id=145 op=UNLOAD Dec 12 22:51:34.623000 audit[2849]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.623000 audit: BPF prog-id=144 op=UNLOAD Dec 12 22:51:34.623000 audit[2849]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.623000 audit: BPF prog-id=146 op=LOAD Dec 12 22:51:34.623000 audit[2849]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2838 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613438373133333331336665666266663464336138396330656334 Dec 12 22:51:34.637790 containerd[1577]: time="2025-12-12T22:51:34.637755935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwjzp,Uid:a177a890-bd7a-4a23-b629-04d4c4075d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fa487133313fefbff4d3a89c0ec43dcb68d5a500fc0e7efda0f86035dcad582\"" Dec 12 22:51:34.638642 kubelet[2729]: E1212 22:51:34.638610 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:34.641158 containerd[1577]: time="2025-12-12T22:51:34.641131383Z" level=info msg="CreateContainer within sandbox \"0fa487133313fefbff4d3a89c0ec43dcb68d5a500fc0e7efda0f86035dcad582\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 22:51:34.653138 containerd[1577]: time="2025-12-12T22:51:34.652071004Z" level=info msg="Container 95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:34.659820 containerd[1577]: time="2025-12-12T22:51:34.659770809Z" level=info msg="CreateContainer within sandbox \"0fa487133313fefbff4d3a89c0ec43dcb68d5a500fc0e7efda0f86035dcad582\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a\"" Dec 12 22:51:34.660824 containerd[1577]: time="2025-12-12T22:51:34.660797999Z" level=info msg="StartContainer for \"95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a\"" Dec 12 22:51:34.662459 containerd[1577]: time="2025-12-12T22:51:34.662431874Z" level=info msg="connecting to shim 95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a" address="unix:///run/containerd/s/0be295fef58247819efb4805df4991ebfe17da13e4ec61e399f9ca96fb529b34" protocol=ttrpc version=3 Dec 12 22:51:34.699791 systemd[1]: Started cri-containerd-95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a.scope - libcontainer container 95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a. 
Dec 12 22:51:34.760000 audit: BPF prog-id=147 op=LOAD Dec 12 22:51:34.760000 audit[2875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2838 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623066656263633435643635353464613031383231386465313935 Dec 12 22:51:34.760000 audit: BPF prog-id=148 op=LOAD Dec 12 22:51:34.760000 audit[2875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2838 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623066656263633435643635353464613031383231386465313935 Dec 12 22:51:34.760000 audit: BPF prog-id=148 op=UNLOAD Dec 12 22:51:34.760000 audit[2875]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2838 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623066656263633435643635353464613031383231386465313935 Dec 12 22:51:34.760000 audit: BPF prog-id=147 op=UNLOAD Dec 12 22:51:34.760000 audit[2875]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2838 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623066656263633435643635353464613031383231386465313935 Dec 12 22:51:34.761000 audit: BPF prog-id=149 op=LOAD Dec 12 22:51:34.761000 audit[2875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2838 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623066656263633435643635353464613031383231386465313935 Dec 12 22:51:34.789562 containerd[1577]: time="2025-12-12T22:51:34.789445236Z" level=info msg="StartContainer for 
\"95b0febcc45d6554da018218de195c8d64aba0401ca88b70859078eae806111a\" returns successfully" Dec 12 22:51:34.953000 audit[2940]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:34.953000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4dacb50 a2=0 a3=1 items=0 ppid=2887 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 22:51:34.954000 audit[2941]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:34.954000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff58cd1a0 a2=0 a3=1 items=0 ppid=2887 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 22:51:34.956000 audit[2943]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:34.956000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca5d7f80 a2=0 a3=1 items=0 ppid=2887 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 22:51:34.956000 audit[2942]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:34.956000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff690e6b0 a2=0 a3=1 items=0 ppid=2887 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 22:51:34.960000 audit[2944]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:34.960000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd88ddb0 a2=0 a3=1 items=0 ppid=2887 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 22:51:34.961000 audit[2946]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2946 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:34.961000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6850a00 a2=0 a3=1 items=0 ppid=2887 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:34.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 22:51:35.059000 audit[2947]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.059000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe7648480 a2=0 a3=1 items=0 ppid=2887 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 22:51:35.062000 audit[2949]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.062000 audit[2949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd028b430 a2=0 a3=1 items=0 ppid=2887 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 22:51:35.066000 audit[2952]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.066000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdf6b2190 a2=0 a3=1 items=0 ppid=2887 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 22:51:35.067000 audit[2953]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.067000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3b35d70 a2=0 a3=1 items=0 ppid=2887 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 22:51:35.069000 
audit[2955]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.069000 audit[2955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca6a9f80 a2=0 a3=1 items=0 ppid=2887 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 22:51:35.070000 audit[2956]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.070000 audit[2956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd9dd450 a2=0 a3=1 items=0 ppid=2887 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 22:51:35.073000 audit[2958]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.073000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffccb2e230 a2=0 a3=1 items=0 ppid=2887 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.073000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 22:51:35.076000 audit[2961]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.076000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd7be3730 a2=0 a3=1 items=0 ppid=2887 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 22:51:35.078000 audit[2962]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.078000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd76d1aa0 a2=0 a3=1 items=0 ppid=2887 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 22:51:35.080000 audit[2964]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.080000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebef14c0 a2=0 a3=1 items=0 ppid=2887 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 22:51:35.081000 audit[2965]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.081000 audit[2965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7c78e90 a2=0 a3=1 items=0 ppid=2887 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 22:51:35.084000 audit[2967]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.084000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffddfd83f0 a2=0 a3=1 items=0 ppid=2887 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 22:51:35.087000 audit[2970]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.087000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4aa80d0 a2=0 a3=1 items=0 ppid=2887 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 22:51:35.093000 audit[2973]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.093000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdd33c4c0 
a2=0 a3=1 items=0 ppid=2887 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 22:51:35.094000 audit[2974]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.094000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe55c14b0 a2=0 a3=1 items=0 ppid=2887 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 22:51:35.097000 audit[2976]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.097000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd0a7a150 a2=0 a3=1 items=0 ppid=2887 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 22:51:35.101000 audit[2979]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.101000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe23b0cc0 a2=0 a3=1 items=0 ppid=2887 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 22:51:35.102000 audit[2980]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 22:51:35.102000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7e5d430 a2=0 a3=1 items=0 ppid=2887 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 22:51:35.104000 audit[2982]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Dec 12 22:51:35.104000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffd638100 a2=0 a3=1 items=0 ppid=2887 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 22:51:35.125000 audit[2988]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:35.125000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd7ca6d90 a2=0 a3=1 items=0 ppid=2887 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.125000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:35.137000 audit[2988]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:35.137000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd7ca6d90 a2=0 a3=1 items=0 ppid=2887 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:35.139000 audit[2993]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.139000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcf30b6d0 a2=0 a3=1 items=0 ppid=2887 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 22:51:35.142000 audit[2995]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.142000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe436db90 a2=0 a3=1 items=0 ppid=2887 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.142000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 22:51:35.145000 audit[2998]: NETFILTER_CFG table=filter:83 
family=10 entries=1 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.145000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd66956c0 a2=0 a3=1 items=0 ppid=2887 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 22:51:35.146000 audit[2999]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.146000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4603320 a2=0 a3=1 items=0 ppid=2887 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 22:51:35.149000 audit[3001]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.149000 audit[3001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce9aa430 a2=0 a3=1 items=0 ppid=2887 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 22:51:35.151000 audit[3002]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.151000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8d08d00 a2=0 a3=1 items=0 ppid=2887 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 22:51:35.153000 audit[3004]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.153000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1e21b70 a2=0 a3=1 items=0 ppid=2887 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.153000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 22:51:35.156000 audit[3007]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.156000 audit[3007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd70c21f0 a2=0 a3=1 items=0 ppid=2887 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 22:51:35.157000 audit[3008]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.157000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb7f3dc0 a2=0 a3=1 items=0 ppid=2887 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 22:51:35.160000 audit[3010]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.160000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb3b5a00 a2=0 a3=1 items=0 ppid=2887 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 22:51:35.161000 audit[3011]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.161000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc708b0e0 a2=0 a3=1 items=0 ppid=2887 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 22:51:35.164000 audit[3013]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.164000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd6debc0 a2=0 a3=1 items=0 ppid=2887 pid=3013 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 22:51:35.168000 audit[3016]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.168000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe13f4000 a2=0 a3=1 items=0 ppid=2887 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 22:51:35.173000 audit[3019]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.173000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffc386d60 a2=0 a3=1 items=0 ppid=2887 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 22:51:35.175000 audit[3020]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.175000 audit[3020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd29f9c00 a2=0 a3=1 items=0 ppid=2887 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 22:51:35.178000 audit[3022]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.178000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff80b7490 a2=0 a3=1 items=0 ppid=2887 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 
22:51:35.184000 audit[3025]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.184000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd4e4900 a2=0 a3=1 items=0 ppid=2887 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 22:51:35.185000 audit[3026]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.185000 audit[3026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf1a8c20 a2=0 a3=1 items=0 ppid=2887 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 22:51:35.188000 audit[3028]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.188000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc52c90e0 a2=0 a3=1 items=0 ppid=2887 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 22:51:35.189000 audit[3029]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.189000 audit[3029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcac6c0d0 a2=0 a3=1 items=0 ppid=2887 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 22:51:35.192000 audit[3031]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.192000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdaed1a80 a2=0 a3=1 items=0 ppid=2887 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.192000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 22:51:35.196000 audit[3034]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 22:51:35.196000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcdb4f610 a2=0 a3=1 items=0 ppid=2887 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 22:51:35.199000 audit[3036]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 22:51:35.199000 audit[3036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffffc95a750 a2=0 a3=1 items=0 ppid=2887 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.199000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:35.199000 audit[3036]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 22:51:35.199000 audit[3036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffffc95a750 a2=0 a3=1 items=0 ppid=2887 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.199000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:35.474815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439785287.mount: Deactivated successfully. 
Dec 12 22:51:35.785878 kubelet[2729]: E1212 22:51:35.785729 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:35.894150 containerd[1577]: time="2025-12-12T22:51:35.894104743Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:35.895374 containerd[1577]: time="2025-12-12T22:51:35.895322080Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Dec 12 22:51:35.896162 containerd[1577]: time="2025-12-12T22:51:35.896137972Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:35.898372 containerd[1577]: time="2025-12-12T22:51:35.898319277Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:35.898923 containerd[1577]: time="2025-12-12T22:51:35.898903612Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.415388025s" Dec 12 22:51:35.898992 containerd[1577]: time="2025-12-12T22:51:35.898929185Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 22:51:35.904294 containerd[1577]: time="2025-12-12T22:51:35.904179883Z" level=info msg="CreateContainer within sandbox \"6e1cac764f998d7df3376eb036f343a3ee9e38f5c01b2e5c8fcf1929683807f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 22:51:35.910280 containerd[1577]: time="2025-12-12T22:51:35.910244913Z" level=info msg="Container f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:35.916284 containerd[1577]: time="2025-12-12T22:51:35.916244590Z" level=info msg="CreateContainer within sandbox \"6e1cac764f998d7df3376eb036f343a3ee9e38f5c01b2e5c8fcf1929683807f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d\"" Dec 12 22:51:35.917545 containerd[1577]: time="2025-12-12T22:51:35.916648314Z" level=info msg="StartContainer for \"f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d\"" Dec 12 22:51:35.917545 containerd[1577]: time="2025-12-12T22:51:35.917389490Z" level=info msg="connecting to shim f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d" address="unix:///run/containerd/s/b03a60dc4987f0676f40935107e3dee6439029ad6c1f797c0f4ff66b5a9d765c" protocol=ttrpc version=3 Dec 12 22:51:35.944731 systemd[1]: Started cri-containerd-f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d.scope - libcontainer container f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d. 
Dec 12 22:51:35.954000 audit: BPF prog-id=150 op=LOAD Dec 12 22:51:35.955000 audit: BPF prog-id=151 op=LOAD Dec 12 22:51:35.955000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.955000 audit: BPF prog-id=151 op=UNLOAD Dec 12 22:51:35.955000 audit[3045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.955000 audit: BPF prog-id=152 op=LOAD Dec 12 22:51:35.955000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.955000 audit: BPF prog-id=153 op=LOAD Dec 12 22:51:35.955000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.955000 audit: BPF prog-id=153 op=UNLOAD Dec 12 22:51:35.955000 audit[3045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.955000 audit: BPF prog-id=152 op=UNLOAD Dec 12 22:51:35.955000 audit[3045]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.955000 audit: BPF prog-id=154 op=LOAD Dec 12 22:51:35.955000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2792 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:35.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396530376566363138343462346262383233653666623430613533 Dec 12 22:51:35.974272 containerd[1577]: time="2025-12-12T22:51:35.974176114Z" level=info msg="StartContainer for \"f99e07ef61844b4bb823e6fb40a53faa385d0b8a3ad148f0dff047269582337d\" returns successfully" Dec 12 22:51:36.788620 kubelet[2729]: E1212 22:51:36.788571 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:36.798684 kubelet[2729]: E1212 22:51:36.798625 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:36.809006 kubelet[2729]: I1212 22:51:36.808930 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xwjzp" podStartSLOduration=3.808914008 podStartE2EDuration="3.808914008s" podCreationTimestamp="2025-12-12 22:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 22:51:35.80363527 +0000 UTC m=+8.148224759" watchObservedRunningTime="2025-12-12 22:51:36.808914008 +0000 UTC m=+9.153503537" Dec 12 22:51:36.821225 kubelet[2729]: I1212 22:51:36.821162 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4tl6n" podStartSLOduration=1.4016562129999999 podStartE2EDuration="2.821141819s" podCreationTimestamp="2025-12-12 22:51:34 +0000 UTC" firstStartedPulling="2025-12-12 22:51:34.482615345 +0000 UTC m=+6.827204874" lastFinishedPulling="2025-12-12 22:51:35.902100951 +0000 UTC m=+8.246690480" observedRunningTime="2025-12-12 22:51:36.80877182 +0000 UTC m=+9.153361349" watchObservedRunningTime="2025-12-12 22:51:36.821141819 +0000 UTC m=+9.165731348" Dec 12 22:51:37.807885 kubelet[2729]: E1212 22:51:37.807497 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:37.837444 kubelet[2729]: E1212 22:51:37.837395 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:38.808696 kubelet[2729]: E1212 22:51:38.808510 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:41.180346 kubelet[2729]: E1212 22:51:41.179321 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:41.644333 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 22:51:41.644444 kernel: audit: type=1106 audit(1765579901.639:527): pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:41.639000 audit[1799]: USER_END pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:41.640264 sudo[1799]: pam_unix(sudo:session): session closed for user root Dec 12 22:51:41.648973 sshd[1798]: Connection closed by 10.0.0.1 port 34552 Dec 12 22:51:41.648139 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Dec 12 22:51:41.654837 kernel: audit: type=1104 audit(1765579901.639:528): pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:41.639000 audit[1799]: CRED_DISP pid=1799 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 22:51:41.650000 audit[1794]: USER_END pid=1794 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:41.660421 kernel: audit: type=1106 audit(1765579901.650:529): pid=1794 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:41.657825 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 22:51:41.659674 systemd[1]: session-8.scope: Consumed 6.479s CPU time, 189.8M memory peak. Dec 12 22:51:41.660834 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:34552.service: Deactivated successfully. 
Dec 12 22:51:41.650000 audit[1794]: CRED_DISP pid=1794 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:41.667590 kernel: audit: type=1104 audit(1765579901.650:530): pid=1794 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:51:41.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.28:22-10.0.0.1:34552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:41.668505 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Dec 12 22:51:41.673580 kernel: audit: type=1131 audit(1765579901.660:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.28:22-10.0.0.1:34552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:51:41.673628 systemd-logind[1556]: Removed session 8. Dec 12 22:51:42.045000 audit[3136]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:42.045000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcb0c3ce0 a2=0 a3=1 items=0 ppid=2887 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:42.049569 kernel: audit: type=1325 audit(1765579902.045:532): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:42.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:42.055819 kernel: audit: type=1300 audit(1765579902.045:532): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcb0c3ce0 a2=0 a3=1 items=0 ppid=2887 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:42.055946 kernel: audit: type=1327 audit(1765579902.045:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:42.063874 kernel: audit: type=1325 audit(1765579902.057:533): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:42.063986 kernel: audit: type=1300 audit(1765579902.057:533): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcb0c3ce0 a2=0 a3=1 items=0 ppid=2887 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:42.057000 audit[3136]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:42.057000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=2700 a0=3 a1=ffffcb0c3ce0 a2=0 a3=1 items=0 ppid=2887 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:42.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:42.077000 audit[3138]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:42.077000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd220f0f0 a2=0 a3=1 items=0 ppid=2887 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:42.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:42.081000 audit[3138]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:42.081000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd220f0f0 a2=0 a3=1 items=0 ppid=2887 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:42.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:45.516616 update_engine[1559]: I20251212 22:51:45.516548 1559 update_attempter.cc:509] Updating boot flags... 
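The audit PROCTITLE values in the records above are the invoking command line, hex-encoded with NUL bytes separating the arguments. As a minimal sketch (standard-library Python only; the variable names are illustrative, and the hex string is copied verbatim from the iptables-restore records above), the field can be decoded like this:

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The runc PROCTITLE values earlier in the log decode the same way.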
Dec 12 22:51:46.218000 audit[3156]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:46.218000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe4712da0 a2=0 a3=1 items=0 ppid=2887 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:46.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:46.223000 audit[3156]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:46.223000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe4712da0 a2=0 a3=1 items=0 ppid=2887 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:46.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:46.273000 audit[3158]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:46.273000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff86567d0 a2=0 a3=1 items=0 ppid=2887 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:46.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:46.282000 audit[3158]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:46.282000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff86567d0 a2=0 a3=1 items=0 ppid=2887 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:46.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:47.295000 audit[3161]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:47.300604 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 12 22:51:47.300684 kernel: audit: type=1325 audit(1765579907.295:540): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:47.300705 kernel: audit: type=1300 audit(1765579907.295:540): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd732fc90 a2=0 a3=1 items=0 ppid=2887 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 22:51:47.295000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd732fc90 a2=0 a3=1 items=0 ppid=2887 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:47.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:47.306268 kernel: audit: type=1327 audit(1765579907.295:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:47.308000 audit[3161]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:47.308000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd732fc90 a2=0 a3=1 items=0 ppid=2887 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:47.316167 kernel: audit: type=1325 audit(1765579907.308:541): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:47.316249 kernel: audit: type=1300 audit(1765579907.308:541): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd732fc90 a2=0 a3=1 items=0 ppid=2887 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:47.316278 kernel: audit: type=1327 audit(1765579907.308:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:47.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:48.720000 audit[3163]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:48.720000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe48ddd0 a2=0 a3=1 items=0 ppid=2887 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:48.728449 kernel: audit: type=1325 audit(1765579908.720:542): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:48.728644 kernel: audit: type=1300 audit(1765579908.720:542): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe48ddd0 a2=0 a3=1 items=0 ppid=2887 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:48.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:48.732381 kernel: audit: type=1327 audit(1765579908.720:542): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:48.735000 audit[3163]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:48.735000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe48ddd0 a2=0 a3=1 items=0 ppid=2887 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:48.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:48.740649 kernel: audit: type=1325 audit(1765579908.735:543): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:48.768394 systemd[1]: Created slice kubepods-besteffort-poda31c31d6_1731_428a_b783_e11509eebe5d.slice - libcontainer container kubepods-besteffort-poda31c31d6_1731_428a_b783_e11509eebe5d.slice. Dec 12 22:51:48.857251 kubelet[2729]: I1212 22:51:48.857170 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31c31d6-1731-428a-b783-e11509eebe5d-tigera-ca-bundle\") pod \"calico-typha-7cf5977748-nrrm7\" (UID: \"a31c31d6-1731-428a-b783-e11509eebe5d\") " pod="calico-system/calico-typha-7cf5977748-nrrm7" Dec 12 22:51:48.857251 kubelet[2729]: I1212 22:51:48.857221 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a31c31d6-1731-428a-b783-e11509eebe5d-typha-certs\") pod \"calico-typha-7cf5977748-nrrm7\" (UID: \"a31c31d6-1731-428a-b783-e11509eebe5d\") " pod="calico-system/calico-typha-7cf5977748-nrrm7" Dec 12 22:51:48.857251 kubelet[2729]: I1212 22:51:48.857241 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2grq\" (UniqueName: \"kubernetes.io/projected/a31c31d6-1731-428a-b783-e11509eebe5d-kube-api-access-g2grq\") pod \"calico-typha-7cf5977748-nrrm7\" (UID: \"a31c31d6-1731-428a-b783-e11509eebe5d\") " pod="calico-system/calico-typha-7cf5977748-nrrm7" Dec 12 22:51:48.935337 systemd[1]: Created slice kubepods-besteffort-podb6826d5a_4664_4855_b9f8_0eed8b069a49.slice - libcontainer container kubepods-besteffort-podb6826d5a_4664_4855_b9f8_0eed8b069a49.slice. 
Dec 12 22:51:48.957818 kubelet[2729]: I1212 22:51:48.957742 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-cni-log-dir\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.957818 kubelet[2729]: I1212 22:51:48.957796 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-policysync\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.957818 kubelet[2729]: I1212 22:51:48.957814 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-xtables-lock\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958010 kubelet[2729]: I1212 22:51:48.957842 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-flexvol-driver-host\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958010 kubelet[2729]: I1212 22:51:48.957869 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-cni-bin-dir\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958010 kubelet[2729]: I1212 22:51:48.957883 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-lib-modules\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958010 kubelet[2729]: I1212 22:51:48.957897 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6826d5a-4664-4855-b9f8-0eed8b069a49-node-certs\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958010 kubelet[2729]: I1212 22:51:48.957912 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-var-run-calico\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958111 kubelet[2729]: I1212 22:51:48.957927 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-var-lib-calico\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958111 kubelet[2729]: I1212 22:51:48.957941 2729 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7bv\" (UniqueName: \"kubernetes.io/projected/b6826d5a-4664-4855-b9f8-0eed8b069a49-kube-api-access-kn7bv\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958111 kubelet[2729]: I1212 22:51:48.957969 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6826d5a-4664-4855-b9f8-0eed8b069a49-cni-net-dir\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:48.958111 kubelet[2729]: I1212 22:51:48.957984 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6826d5a-4664-4855-b9f8-0eed8b069a49-tigera-ca-bundle\") pod \"calico-node-mt69f\" (UID: \"b6826d5a-4664-4855-b9f8-0eed8b069a49\") " pod="calico-system/calico-node-mt69f" Dec 12 22:51:49.068271 kubelet[2729]: E1212 22:51:49.066696 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.068271 kubelet[2729]: W1212 22:51:49.066719 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.068271 kubelet[2729]: E1212 22:51:49.066750 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.070013 kubelet[2729]: E1212 22:51:49.069993 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.070013 kubelet[2729]: W1212 22:51:49.070011 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.070103 kubelet[2729]: E1212 22:51:49.070025 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.073190 kubelet[2729]: E1212 22:51:49.073134 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:49.073788 containerd[1577]: time="2025-12-12T22:51:49.073736779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf5977748-nrrm7,Uid:a31c31d6-1731-428a-b783-e11509eebe5d,Namespace:calico-system,Attempt:0,}" Dec 12 22:51:49.101378 containerd[1577]: time="2025-12-12T22:51:49.101320832Z" level=info msg="connecting to shim 7de728ddd9f5920f95c560d81a36a1e4f1fab8e751da3351d05e23e1b202ef4d" address="unix:///run/containerd/s/0e346b62b409caa9788173928741fc0f44677bde797e309905e3eb403f3f9cf0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:49.128326 kubelet[2729]: E1212 22:51:49.128281 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:51:49.147180 systemd[1]: Started cri-containerd-7de728ddd9f5920f95c560d81a36a1e4f1fab8e751da3351d05e23e1b202ef4d.scope - libcontainer container 7de728ddd9f5920f95c560d81a36a1e4f1fab8e751da3351d05e23e1b202ef4d. Dec 12 22:51:49.155754 kubelet[2729]: E1212 22:51:49.155694 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.155973 kubelet[2729]: W1212 22:51:49.155850 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.155973 kubelet[2729]: E1212 22:51:49.155876 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.156230 kubelet[2729]: E1212 22:51:49.156216 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.156415 kubelet[2729]: W1212 22:51:49.156291 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.156415 kubelet[2729]: E1212 22:51:49.156377 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.156693 kubelet[2729]: E1212 22:51:49.156680 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.156693 kubelet[2729]: W1212 22:51:49.156718 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.156693 kubelet[2729]: E1212 22:51:49.156731 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.157121 kubelet[2729]: E1212 22:51:49.157065 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.157121 kubelet[2729]: W1212 22:51:49.157078 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.157121 kubelet[2729]: E1212 22:51:49.157089 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.157447 kubelet[2729]: E1212 22:51:49.157418 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.157535 kubelet[2729]: W1212 22:51:49.157430 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.157670 kubelet[2729]: E1212 22:51:49.157592 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.157849 kubelet[2729]: E1212 22:51:49.157813 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.158003 kubelet[2729]: W1212 22:51:49.157929 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.158003 kubelet[2729]: E1212 22:51:49.157946 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.158277 kubelet[2729]: E1212 22:51:49.158264 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.158430 kubelet[2729]: W1212 22:51:49.158329 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.158430 kubelet[2729]: E1212 22:51:49.158344 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.158795 kubelet[2729]: E1212 22:51:49.158665 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.158795 kubelet[2729]: W1212 22:51:49.158679 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.158795 kubelet[2729]: E1212 22:51:49.158689 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.159012 kubelet[2729]: E1212 22:51:49.158999 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.159194 kubelet[2729]: W1212 22:51:49.159072 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.159194 kubelet[2729]: E1212 22:51:49.159087 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.159304 kubelet[2729]: E1212 22:51:49.159292 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.159362 kubelet[2729]: W1212 22:51:49.159350 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.159416 kubelet[2729]: E1212 22:51:49.159407 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.159000 audit: BPF prog-id=155 op=LOAD Dec 12 22:51:49.159990 kubelet[2729]: E1212 22:51:49.159694 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.159990 kubelet[2729]: W1212 22:51:49.159707 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.159990 kubelet[2729]: E1212 22:51:49.159717 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.159000 audit: BPF prog-id=156 op=LOAD Dec 12 22:51:49.159000 audit[3190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000168180 a2=98 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.159000 audit: BPF prog-id=156 op=UNLOAD Dec 12 22:51:49.159000 audit[3190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.159000 audit: BPF prog-id=157 op=LOAD Dec 12 22:51:49.159000 audit[3190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001683e8 a2=98 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.160000 audit: BPF prog-id=158 op=LOAD Dec 12 22:51:49.160000 audit[3190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000168168 a2=98 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.160000 audit: BPF prog-id=158 op=UNLOAD Dec 12 22:51:49.160000 audit[3190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.160000 audit: BPF prog-id=157 op=UNLOAD Dec 12 22:51:49.160000 audit[3190]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.160000 audit: BPF prog-id=159 op=LOAD Dec 12 22:51:49.160000 audit[3190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000168648 a2=98 a3=0 items=0 ppid=3180 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764653732386464643966353932306639356335363064383161333661 Dec 12 22:51:49.161653 kubelet[2729]: E1212 22:51:49.160208 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.161653 kubelet[2729]: W1212 22:51:49.160223 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.161653 kubelet[2729]: E1212 22:51:49.160234 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.161653 kubelet[2729]: E1212 22:51:49.160958 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.161653 kubelet[2729]: W1212 22:51:49.160970 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.161653 kubelet[2729]: E1212 22:51:49.161000 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.161653 kubelet[2729]: E1212 22:51:49.161290 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.161653 kubelet[2729]: W1212 22:51:49.161300 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.161653 kubelet[2729]: E1212 22:51:49.161314 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.161969 kubelet[2729]: E1212 22:51:49.161953 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.162022 kubelet[2729]: W1212 22:51:49.162011 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.162170 kubelet[2729]: E1212 22:51:49.162063 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.162438 kubelet[2729]: E1212 22:51:49.162411 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.162649 kubelet[2729]: W1212 22:51:49.162515 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.162752 kubelet[2729]: E1212 22:51:49.162737 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.163074 kubelet[2729]: E1212 22:51:49.163061 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.163221 kubelet[2729]: W1212 22:51:49.163133 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.163221 kubelet[2729]: E1212 22:51:49.163148 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.163416 kubelet[2729]: E1212 22:51:49.163404 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.163504 kubelet[2729]: W1212 22:51:49.163491 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.163586 kubelet[2729]: E1212 22:51:49.163574 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.163952 kubelet[2729]: E1212 22:51:49.163900 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.163952 kubelet[2729]: W1212 22:51:49.163914 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.163952 kubelet[2729]: E1212 22:51:49.163924 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.164275 kubelet[2729]: E1212 22:51:49.164257 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.164352 kubelet[2729]: W1212 22:51:49.164338 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.164419 kubelet[2729]: E1212 22:51:49.164393 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.164891 kubelet[2729]: E1212 22:51:49.164872 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.165098 kubelet[2729]: W1212 22:51:49.164953 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.165098 kubelet[2729]: E1212 22:51:49.164971 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.165098 kubelet[2729]: I1212 22:51:49.165003 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3344cb69-6333-4eee-b470-ee8fe022cd55-kubelet-dir\") pod \"csi-node-driver-nknqg\" (UID: \"3344cb69-6333-4eee-b470-ee8fe022cd55\") " pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:49.165254 kubelet[2729]: E1212 22:51:49.165240 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.165313 kubelet[2729]: W1212 22:51:49.165302 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.165381 kubelet[2729]: E1212 22:51:49.165370 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.165447 kubelet[2729]: I1212 22:51:49.165437 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9sf\" (UniqueName: \"kubernetes.io/projected/3344cb69-6333-4eee-b470-ee8fe022cd55-kube-api-access-bn9sf\") pod \"csi-node-driver-nknqg\" (UID: \"3344cb69-6333-4eee-b470-ee8fe022cd55\") " pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:49.165637 kubelet[2729]: E1212 22:51:49.165616 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.165637 kubelet[2729]: W1212 22:51:49.165635 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.165720 kubelet[2729]: E1212 22:51:49.165653 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.166215 kubelet[2729]: E1212 22:51:49.166196 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.166215 kubelet[2729]: W1212 22:51:49.166212 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.166274 kubelet[2729]: E1212 22:51:49.166228 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.166433 kubelet[2729]: E1212 22:51:49.166421 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.166433 kubelet[2729]: W1212 22:51:49.166433 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.166489 kubelet[2729]: E1212 22:51:49.166451 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.166489 kubelet[2729]: I1212 22:51:49.166469 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3344cb69-6333-4eee-b470-ee8fe022cd55-registration-dir\") pod \"csi-node-driver-nknqg\" (UID: \"3344cb69-6333-4eee-b470-ee8fe022cd55\") " pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:49.166707 kubelet[2729]: E1212 22:51:49.166693 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.166766 kubelet[2729]: W1212 22:51:49.166708 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.166766 kubelet[2729]: E1212 22:51:49.166740 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.166877 kubelet[2729]: I1212 22:51:49.166775 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3344cb69-6333-4eee-b470-ee8fe022cd55-socket-dir\") pod \"csi-node-driver-nknqg\" (UID: \"3344cb69-6333-4eee-b470-ee8fe022cd55\") " pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:49.166912 kubelet[2729]: E1212 22:51:49.166900 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.166912 kubelet[2729]: W1212 22:51:49.166908 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.167022 kubelet[2729]: E1212 22:51:49.166944 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.167044 kubelet[2729]: E1212 22:51:49.167039 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.167168 kubelet[2729]: W1212 22:51:49.167047 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.167168 kubelet[2729]: E1212 22:51:49.167061 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.167359 kubelet[2729]: E1212 22:51:49.167207 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.167359 kubelet[2729]: W1212 22:51:49.167214 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.167359 kubelet[2729]: E1212 22:51:49.167222 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.167359 kubelet[2729]: I1212 22:51:49.167238 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3344cb69-6333-4eee-b470-ee8fe022cd55-varrun\") pod \"csi-node-driver-nknqg\" (UID: \"3344cb69-6333-4eee-b470-ee8fe022cd55\") " pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:49.167359 kubelet[2729]: E1212 22:51:49.167385 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.167359 kubelet[2729]: W1212 22:51:49.167393 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.167359 kubelet[2729]: E1212 22:51:49.167407 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.167773 kubelet[2729]: E1212 22:51:49.167720 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.167773 kubelet[2729]: W1212 22:51:49.167734 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.167773 kubelet[2729]: E1212 22:51:49.167746 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.168056 kubelet[2729]: E1212 22:51:49.168037 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.168056 kubelet[2729]: W1212 22:51:49.168051 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.168056 kubelet[2729]: E1212 22:51:49.168067 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.168394 kubelet[2729]: E1212 22:51:49.168365 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.168457 kubelet[2729]: W1212 22:51:49.168444 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.168556 kubelet[2729]: E1212 22:51:49.168512 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.168899 kubelet[2729]: E1212 22:51:49.168779 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.168899 kubelet[2729]: W1212 22:51:49.168793 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.168899 kubelet[2729]: E1212 22:51:49.168802 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.169049 kubelet[2729]: E1212 22:51:49.169036 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.169096 kubelet[2729]: W1212 22:51:49.169086 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.169142 kubelet[2729]: E1212 22:51:49.169133 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.184444 containerd[1577]: time="2025-12-12T22:51:49.184402189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf5977748-nrrm7,Uid:a31c31d6-1731-428a-b783-e11509eebe5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7de728ddd9f5920f95c560d81a36a1e4f1fab8e751da3351d05e23e1b202ef4d\"" Dec 12 22:51:49.187307 kubelet[2729]: E1212 22:51:49.187282 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:49.188306 containerd[1577]: time="2025-12-12T22:51:49.188273049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 22:51:49.237505 kubelet[2729]: E1212 22:51:49.237472 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:49.238737 containerd[1577]: time="2025-12-12T22:51:49.237993192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mt69f,Uid:b6826d5a-4664-4855-b9f8-0eed8b069a49,Namespace:calico-system,Attempt:0,}" Dec 12 22:51:49.261572 containerd[1577]: time="2025-12-12T22:51:49.261480090Z" level=info msg="connecting to shim db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94" address="unix:///run/containerd/s/3c96800bba57a328c1566d3d96d7836b930a764292cdd6c1147be2ddbb6433a1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:51:49.270156 kubelet[2729]: E1212 22:51:49.270112 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.270156 kubelet[2729]: W1212 22:51:49.270138 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.270156 kubelet[2729]: E1212 22:51:49.270159 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.270967 kubelet[2729]: E1212 22:51:49.270363 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.270967 kubelet[2729]: W1212 22:51:49.270373 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.270967 kubelet[2729]: E1212 22:51:49.270387 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.271343 kubelet[2729]: E1212 22:51:49.271231 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.271343 kubelet[2729]: W1212 22:51:49.271255 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.271343 kubelet[2729]: E1212 22:51:49.271277 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.271649 kubelet[2729]: E1212 22:51:49.271633 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.271834 kubelet[2729]: W1212 22:51:49.271707 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.271834 kubelet[2729]: E1212 22:51:49.271737 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.272233 kubelet[2729]: E1212 22:51:49.272132 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.272233 kubelet[2729]: W1212 22:51:49.272155 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.272419 kubelet[2729]: E1212 22:51:49.272373 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.272617 kubelet[2729]: E1212 22:51:49.272507 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.272617 kubelet[2729]: W1212 22:51:49.272548 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.272899 kubelet[2729]: E1212 22:51:49.272774 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.273729 kubelet[2729]: E1212 22:51:49.273644 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.273729 kubelet[2729]: W1212 22:51:49.273659 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.273729 kubelet[2729]: E1212 22:51:49.273697 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.274257 kubelet[2729]: E1212 22:51:49.273852 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.274257 kubelet[2729]: W1212 22:51:49.273866 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.274257 kubelet[2729]: E1212 22:51:49.274002 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.274257 kubelet[2729]: W1212 22:51:49.274010 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.274257 kubelet[2729]: E1212 22:51:49.274139 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.274257 kubelet[2729]: W1212 22:51:49.274147 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.274434 kubelet[2729]: E1212 22:51:49.274314 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.274434 kubelet[2729]: W1212 22:51:49.274321 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.274434 kubelet[2729]: E1212 22:51:49.274332 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.274561 kubelet[2729]: E1212 22:51:49.274476 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.274561 kubelet[2729]: W1212 22:51:49.274485 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.274561 kubelet[2729]: E1212 22:51:49.274493 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.274729 kubelet[2729]: E1212 22:51:49.274655 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.274729 kubelet[2729]: E1212 22:51:49.274690 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.274822 kubelet[2729]: E1212 22:51:49.274700 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.276261 kubelet[2729]: E1212 22:51:49.275757 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.276261 kubelet[2729]: W1212 22:51:49.276257 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.276337 kubelet[2729]: E1212 22:51:49.276284 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.276545 kubelet[2729]: E1212 22:51:49.276516 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.276545 kubelet[2729]: W1212 22:51:49.276544 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.276643 kubelet[2729]: E1212 22:51:49.276601 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.277146 kubelet[2729]: E1212 22:51:49.277097 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.277146 kubelet[2729]: W1212 22:51:49.277116 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.277436 kubelet[2729]: E1212 22:51:49.277413 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.277436 kubelet[2729]: W1212 22:51:49.277435 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.277758 kubelet[2729]: E1212 22:51:49.277453 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.277758 kubelet[2729]: E1212 22:51:49.277140 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.277758 kubelet[2729]: E1212 22:51:49.277683 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.277883 kubelet[2729]: W1212 22:51:49.277761 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.277883 kubelet[2729]: E1212 22:51:49.277777 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.278001 kubelet[2729]: E1212 22:51:49.277983 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.278001 kubelet[2729]: W1212 22:51:49.277998 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.278101 kubelet[2729]: E1212 22:51:49.278014 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.278555 kubelet[2729]: E1212 22:51:49.278148 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.278555 kubelet[2729]: W1212 22:51:49.278157 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.278555 kubelet[2729]: E1212 22:51:49.278170 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.278555 kubelet[2729]: E1212 22:51:49.278335 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.278555 kubelet[2729]: W1212 22:51:49.278345 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.278555 kubelet[2729]: E1212 22:51:49.278391 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.278555 kubelet[2729]: E1212 22:51:49.278500 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.278555 kubelet[2729]: W1212 22:51:49.278510 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.278741 kubelet[2729]: E1212 22:51:49.278564 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.278741 kubelet[2729]: E1212 22:51:49.278709 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.278741 kubelet[2729]: W1212 22:51:49.278719 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.278741 kubelet[2729]: E1212 22:51:49.278728 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:49.278878 kubelet[2729]: E1212 22:51:49.278858 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.278878 kubelet[2729]: W1212 22:51:49.278869 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.278878 kubelet[2729]: E1212 22:51:49.278877 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.279223 kubelet[2729]: E1212 22:51:49.279182 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.279223 kubelet[2729]: W1212 22:51:49.279209 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.279223 kubelet[2729]: E1212 22:51:49.279228 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.279480 kubelet[2729]: E1212 22:51:49.279461 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.279480 kubelet[2729]: W1212 22:51:49.279475 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.279623 kubelet[2729]: E1212 22:51:49.279485 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.288784 kubelet[2729]: E1212 22:51:49.288746 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:49.288784 kubelet[2729]: W1212 22:51:49.288768 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:49.288784 kubelet[2729]: E1212 22:51:49.288786 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:49.289747 systemd[1]: Started cri-containerd-db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94.scope - libcontainer container db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94. 
Dec 12 22:51:49.301000 audit: BPF prog-id=160 op=LOAD Dec 12 22:51:49.301000 audit: BPF prog-id=161 op=LOAD Dec 12 22:51:49.301000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.301000 audit: BPF prog-id=161 op=UNLOAD Dec 12 22:51:49.301000 audit[3283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.302000 audit: BPF prog-id=162 op=LOAD Dec 12 22:51:49.302000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.302000 audit: BPF prog-id=163 op=LOAD Dec 12 22:51:49.302000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.302000 audit: BPF prog-id=163 op=UNLOAD Dec 12 22:51:49.302000 audit[3283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.302000 audit: BPF prog-id=162 op=UNLOAD Dec 12 22:51:49.302000 audit[3283]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.303000 audit: BPF prog-id=164 op=LOAD Dec 12 22:51:49.303000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462346334303231653761363464316539303930356665663537383663 Dec 12 22:51:49.317174 containerd[1577]: time="2025-12-12T22:51:49.317137274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mt69f,Uid:b6826d5a-4664-4855-b9f8-0eed8b069a49,Namespace:calico-system,Attempt:0,} returns sandbox id \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\"" Dec 12 22:51:49.322785 kubelet[2729]: E1212 22:51:49.320779 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:49.749000 audit[3337]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:49.749000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffce625ae0 a2=0 a3=1 items=0 ppid=2887 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:49.754000 audit[3337]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:51:49.754000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce625ae0 a2=0 a3=1 items=0 ppid=2887 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:49.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:51:50.059020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475819418.mount: Deactivated successfully. 
Dec 12 22:51:50.376088 containerd[1577]: time="2025-12-12T22:51:50.374750250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:50.378437 containerd[1577]: time="2025-12-12T22:51:50.378380330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33087431" Dec 12 22:51:50.379536 containerd[1577]: time="2025-12-12T22:51:50.379474583Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:50.381687 containerd[1577]: time="2025-12-12T22:51:50.381623241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:50.382551 containerd[1577]: time="2025-12-12T22:51:50.382079266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.193767728s" Dec 12 22:51:50.382551 containerd[1577]: time="2025-12-12T22:51:50.382116275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 22:51:50.384678 containerd[1577]: time="2025-12-12T22:51:50.384650501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 22:51:50.400954 containerd[1577]: time="2025-12-12T22:51:50.400914865Z" level=info msg="CreateContainer within sandbox \"7de728ddd9f5920f95c560d81a36a1e4f1fab8e751da3351d05e23e1b202ef4d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 22:51:50.409776 containerd[1577]: time="2025-12-12T22:51:50.409740907Z" level=info msg="Container cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:50.419571 containerd[1577]: time="2025-12-12T22:51:50.419503206Z" level=info msg="CreateContainer within sandbox \"7de728ddd9f5920f95c560d81a36a1e4f1fab8e751da3351d05e23e1b202ef4d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6\"" Dec 12 22:51:50.419983 containerd[1577]: time="2025-12-12T22:51:50.419945149Z" level=info msg="StartContainer for \"cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6\"" Dec 12 22:51:50.421011 containerd[1577]: time="2025-12-12T22:51:50.420986670Z" level=info msg="connecting to shim cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6" address="unix:///run/containerd/s/0e346b62b409caa9788173928741fc0f44677bde797e309905e3eb403f3f9cf0" protocol=ttrpc version=3 Dec 12 22:51:50.443797 systemd[1]: Started cri-containerd-cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6.scope - libcontainer container cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6. 
Dec 12 22:51:50.456000 audit: BPF prog-id=165 op=LOAD Dec 12 22:51:50.457000 audit: BPF prog-id=166 op=LOAD Dec 12 22:51:50.457000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.457000 audit: BPF prog-id=166 op=UNLOAD Dec 12 22:51:50.457000 audit[3348]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.457000 audit: BPF prog-id=167 op=LOAD Dec 12 22:51:50.457000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.457000 audit: BPF prog-id=168 op=LOAD Dec 12 22:51:50.457000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.457000 audit: BPF prog-id=168 op=UNLOAD Dec 12 22:51:50.457000 audit[3348]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.457000 audit: BPF prog-id=167 op=UNLOAD Dec 12 22:51:50.457000 audit[3348]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.457000 audit: BPF prog-id=169 op=LOAD Dec 12 22:51:50.457000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3180 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:50.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386561336630346133373130333431333230333766366436656135 Dec 12 22:51:50.514578 containerd[1577]: time="2025-12-12T22:51:50.514540318Z" level=info msg="StartContainer for \"cf8ea3f04a371034132037f6d6ea53b64ca0aec8df13bfade122d449e1bb08e6\" returns successfully" Dec 12 22:51:50.741421 kubelet[2729]: E1212 22:51:50.741301 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:51:50.836789 kubelet[2729]: E1212 22:51:50.836759 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:50.852153 kubelet[2729]: I1212 22:51:50.852089 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf5977748-nrrm7" podStartSLOduration=1.6556531190000001 podStartE2EDuration="2.852074145s" podCreationTimestamp="2025-12-12 22:51:48 +0000 UTC" firstStartedPulling="2025-12-12 22:51:49.187979897 +0000 UTC m=+21.532569426" lastFinishedPulling="2025-12-12 22:51:50.384400923 +0000 UTC m=+22.728990452" observedRunningTime="2025-12-12 22:51:50.849947533 +0000 UTC m=+23.194537102" watchObservedRunningTime="2025-12-12 22:51:50.852074145 +0000 UTC m=+23.196663674" Dec 12 22:51:50.875018 kubelet[2729]: E1212 22:51:50.874988 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.875018 kubelet[2729]: W1212 22:51:50.875012 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.875245 kubelet[2729]: E1212 22:51:50.875033 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.875245 kubelet[2729]: E1212 22:51:50.875198 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.875245 kubelet[2729]: W1212 22:51:50.875207 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.875342 kubelet[2729]: E1212 22:51:50.875254 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.875422 kubelet[2729]: E1212 22:51:50.875411 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.875452 kubelet[2729]: W1212 22:51:50.875422 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.875452 kubelet[2729]: E1212 22:51:50.875432 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.875594 kubelet[2729]: E1212 22:51:50.875582 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.875594 kubelet[2729]: W1212 22:51:50.875593 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.875674 kubelet[2729]: E1212 22:51:50.875602 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.875790 kubelet[2729]: E1212 22:51:50.875780 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.875790 kubelet[2729]: W1212 22:51:50.875791 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.875867 kubelet[2729]: E1212 22:51:50.875799 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.875933 kubelet[2729]: E1212 22:51:50.875923 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.875933 kubelet[2729]: W1212 22:51:50.875933 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876001 kubelet[2729]: E1212 22:51:50.875942 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.876070 kubelet[2729]: E1212 22:51:50.876057 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876070 kubelet[2729]: W1212 22:51:50.876066 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876140 kubelet[2729]: E1212 22:51:50.876074 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.876202 kubelet[2729]: E1212 22:51:50.876191 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876202 kubelet[2729]: W1212 22:51:50.876201 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876333 kubelet[2729]: E1212 22:51:50.876209 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.876358 kubelet[2729]: E1212 22:51:50.876342 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876358 kubelet[2729]: W1212 22:51:50.876349 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876358 kubelet[2729]: E1212 22:51:50.876357 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.876488 kubelet[2729]: E1212 22:51:50.876479 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876488 kubelet[2729]: W1212 22:51:50.876488 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876585 kubelet[2729]: E1212 22:51:50.876495 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.876634 kubelet[2729]: E1212 22:51:50.876620 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876634 kubelet[2729]: W1212 22:51:50.876630 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876707 kubelet[2729]: E1212 22:51:50.876637 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.876766 kubelet[2729]: E1212 22:51:50.876755 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876766 kubelet[2729]: W1212 22:51:50.876764 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876834 kubelet[2729]: E1212 22:51:50.876772 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.876903 kubelet[2729]: E1212 22:51:50.876892 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.876903 kubelet[2729]: W1212 22:51:50.876901 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.876981 kubelet[2729]: E1212 22:51:50.876909 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.877032 kubelet[2729]: E1212 22:51:50.877022 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.877032 kubelet[2729]: W1212 22:51:50.877031 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.877108 kubelet[2729]: E1212 22:51:50.877038 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.877173 kubelet[2729]: E1212 22:51:50.877162 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.877173 kubelet[2729]: W1212 22:51:50.877172 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.877241 kubelet[2729]: E1212 22:51:50.877184 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.889755 kubelet[2729]: E1212 22:51:50.889670 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.889755 kubelet[2729]: W1212 22:51:50.889700 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.889755 kubelet[2729]: E1212 22:51:50.889715 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.889935 kubelet[2729]: E1212 22:51:50.889922 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.889935 kubelet[2729]: W1212 22:51:50.889932 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.890237 kubelet[2729]: E1212 22:51:50.889946 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.890237 kubelet[2729]: E1212 22:51:50.890155 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.890237 kubelet[2729]: W1212 22:51:50.890171 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.890237 kubelet[2729]: E1212 22:51:50.890190 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.890430 kubelet[2729]: E1212 22:51:50.890406 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.890457 kubelet[2729]: W1212 22:51:50.890430 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.890457 kubelet[2729]: E1212 22:51:50.890446 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.890923 kubelet[2729]: E1212 22:51:50.890887 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.890923 kubelet[2729]: W1212 22:51:50.890902 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.890923 kubelet[2729]: E1212 22:51:50.890920 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.891172 kubelet[2729]: E1212 22:51:50.891158 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.891172 kubelet[2729]: W1212 22:51:50.891170 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.891240 kubelet[2729]: E1212 22:51:50.891197 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.891361 kubelet[2729]: E1212 22:51:50.891347 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.891444 kubelet[2729]: W1212 22:51:50.891363 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.891444 kubelet[2729]: E1212 22:51:50.891397 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.891513 kubelet[2729]: E1212 22:51:50.891499 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.891623 kubelet[2729]: W1212 22:51:50.891510 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.891623 kubelet[2729]: E1212 22:51:50.891576 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.891784 kubelet[2729]: E1212 22:51:50.891767 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.891784 kubelet[2729]: W1212 22:51:50.891780 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.891837 kubelet[2729]: E1212 22:51:50.891815 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.892022 kubelet[2729]: E1212 22:51:50.892008 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.892063 kubelet[2729]: W1212 22:51:50.892021 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.892063 kubelet[2729]: E1212 22:51:50.892053 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.892454 kubelet[2729]: E1212 22:51:50.892346 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.892454 kubelet[2729]: W1212 22:51:50.892361 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.892454 kubelet[2729]: E1212 22:51:50.892378 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.892617 kubelet[2729]: E1212 22:51:50.892604 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.892668 kubelet[2729]: W1212 22:51:50.892656 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.892747 kubelet[2729]: E1212 22:51:50.892733 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.893149 kubelet[2729]: E1212 22:51:50.892980 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.893149 kubelet[2729]: W1212 22:51:50.892992 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.893149 kubelet[2729]: E1212 22:51:50.893008 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.893391 kubelet[2729]: E1212 22:51:50.893371 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.893431 kubelet[2729]: W1212 22:51:50.893391 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.893431 kubelet[2729]: E1212 22:51:50.893411 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.893796 kubelet[2729]: E1212 22:51:50.893713 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.893796 kubelet[2729]: W1212 22:51:50.893787 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.893862 kubelet[2729]: E1212 22:51:50.893806 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.894400 kubelet[2729]: E1212 22:51:50.894380 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.894400 kubelet[2729]: W1212 22:51:50.894398 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.894487 kubelet[2729]: E1212 22:51:50.894444 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 22:51:50.894930 kubelet[2729]: E1212 22:51:50.894867 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.894930 kubelet[2729]: W1212 22:51:50.894882 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.894930 kubelet[2729]: E1212 22:51:50.894893 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:50.895625 kubelet[2729]: E1212 22:51:50.895592 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 22:51:50.895625 kubelet[2729]: W1212 22:51:50.895609 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 22:51:50.895625 kubelet[2729]: E1212 22:51:50.895622 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 22:51:51.370996 containerd[1577]: time="2025-12-12T22:51:51.370748004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:51.372035 containerd[1577]: time="2025-12-12T22:51:51.371364700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Dec 12 22:51:51.374927 containerd[1577]: time="2025-12-12T22:51:51.374887879Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:51.380561 containerd[1577]: time="2025-12-12T22:51:51.380355206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:51.381234 containerd[1577]: time="2025-12-12T22:51:51.381119175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 996.431505ms" Dec 12 22:51:51.381234 containerd[1577]: time="2025-12-12T22:51:51.381154903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 22:51:51.384556 containerd[1577]: time="2025-12-12T22:51:51.384344567Z" level=info msg="CreateContainer within sandbox \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 22:51:51.398751 containerd[1577]: time="2025-12-12T22:51:51.397572649Z" level=info msg="Container 89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19: 
CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:51.410471 containerd[1577]: time="2025-12-12T22:51:51.410424648Z" level=info msg="CreateContainer within sandbox \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19\"" Dec 12 22:51:51.412215 containerd[1577]: time="2025-12-12T22:51:51.412170074Z" level=info msg="StartContainer for \"89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19\"" Dec 12 22:51:51.414028 containerd[1577]: time="2025-12-12T22:51:51.414001238Z" level=info msg="connecting to shim 89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19" address="unix:///run/containerd/s/3c96800bba57a328c1566d3d96d7836b930a764292cdd6c1147be2ddbb6433a1" protocol=ttrpc version=3 Dec 12 22:51:51.436741 systemd[1]: Started cri-containerd-89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19.scope - libcontainer container 89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19. Dec 12 22:51:51.498000 audit: BPF prog-id=170 op=LOAD Dec 12 22:51:51.498000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3272 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:51.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373234333231653137633337646634346533303764346562326464 Dec 12 22:51:51.499000 audit: BPF prog-id=171 op=LOAD Dec 12 22:51:51.499000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3272 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:51.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373234333231653137633337646634346533303764346562326464 Dec 12 22:51:51.499000 audit: BPF prog-id=171 op=UNLOAD Dec 12 22:51:51.499000 audit[3425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:51.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373234333231653137633337646634346533303764346562326464 Dec 12 22:51:51.499000 audit: BPF prog-id=170 op=UNLOAD Dec 12 22:51:51.499000 audit[3425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:51.499000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373234333231653137633337646634346533303764346562326464 Dec 12 22:51:51.499000 audit: BPF prog-id=172 op=LOAD Dec 12 22:51:51.499000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3272 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:51.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373234333231653137633337646634346533303764346562326464 Dec 12 22:51:51.519577 containerd[1577]: time="2025-12-12T22:51:51.518602223Z" level=info msg="StartContainer for \"89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19\" returns successfully" Dec 12 22:51:51.532505 systemd[1]: cri-containerd-89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19.scope: Deactivated successfully. Dec 12 22:51:51.533159 systemd[1]: cri-containerd-89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19.scope: Consumed 34ms CPU time, 6.3M memory peak, 2.6M written to disk. Dec 12 22:51:51.536843 containerd[1577]: time="2025-12-12T22:51:51.536806284Z" level=info msg="received container exit event container_id:\"89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19\" id:\"89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19\" pid:3438 exited_at:{seconds:1765579911 nanos:536182826}" Dec 12 22:51:51.536000 audit: BPF prog-id=172 op=UNLOAD Dec 12 22:51:51.565644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89724321e17c37df44e307d4eb2ddc77872fc6a9cc9596ebc1c4e78dbf767d19-rootfs.mount: Deactivated successfully. 
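The audit records above capture runc setting up the flexvol-driver container, but auditd logs the command line as a hex-encoded, NUL-separated `proctitle` field, which makes the records hard to read as-is. A small reading aid, not part of the system being logged, that decodes such fields:

```go
// proctitle-decode: reading aid for the audit records above. The "proctitle="
// value is the audited process's argv, NUL-separated and hex-encoded (and
// truncated to a fixed length); decoding it and swapping NUL for spaces
// recovers the command line.
package main

import (
	"bufio"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin) // paste proctitle hex values, one per line
	for sc.Scan() {
		raw, err := hex.DecodeString(strings.TrimSpace(sc.Text()))
		if err != nil {
			fmt.Fprintln(os.Stderr, "not valid hex, skipping:", err)
			continue
		}
		// argv elements are separated by NUL bytes; print them space-separated.
		fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	}
}
```

Fed the proctitle values above, it prints `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/89724321e17c37df44e307d4eb2dd` (cut short by the audit proctitle length limit), i.e. the runc invocation for container 89724321e17c37…. The FlexVolume probe errors just before are the kubelet failing to find /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds; that driver binary is typically what this pod2daemon-flexvol container installs, so those errors are expected to stop once it has run.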
Dec 12 22:51:51.840191 kubelet[2729]: I1212 22:51:51.840140 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 22:51:51.840615 kubelet[2729]: E1212 22:51:51.840406 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:51.840615 kubelet[2729]: E1212 22:51:51.840405 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:51.841814 containerd[1577]: time="2025-12-12T22:51:51.841774326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 22:51:52.742147 kubelet[2729]: E1212 22:51:52.742080 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:51:54.434233 containerd[1577]: time="2025-12-12T22:51:54.434104554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:54.435603 containerd[1577]: time="2025-12-12T22:51:54.435554394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 22:51:54.436373 containerd[1577]: time="2025-12-12T22:51:54.436349148Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:54.438591 containerd[1577]: time="2025-12-12T22:51:54.438541731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:54.439185 containerd[1577]: time="2025-12-12T22:51:54.439149569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.597336634s" Dec 12 22:51:54.439185 containerd[1577]: time="2025-12-12T22:51:54.439182215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 22:51:54.441021 containerd[1577]: time="2025-12-12T22:51:54.440988444Z" level=info msg="CreateContainer within sandbox \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 22:51:54.450696 containerd[1577]: time="2025-12-12T22:51:54.449011713Z" level=info msg="Container 9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:54.455806 containerd[1577]: time="2025-12-12T22:51:54.455757496Z" level=info msg="CreateContainer within sandbox \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56\"" Dec 12 22:51:54.456206 containerd[1577]: time="2025-12-12T22:51:54.456178777Z" level=info msg="StartContainer for \"9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56\"" Dec 12 22:51:54.460116 containerd[1577]: time="2025-12-12T22:51:54.460079970Z" level=info msg="connecting to shim 9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56" address="unix:///run/containerd/s/3c96800bba57a328c1566d3d96d7836b930a764292cdd6c1147be2ddbb6433a1" protocol=ttrpc version=3 Dec 12 22:51:54.480713 systemd[1]: Started cri-containerd-9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56.scope - libcontainer container 9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56. Dec 12 22:51:54.531000 audit: BPF prog-id=173 op=LOAD Dec 12 22:51:54.534179 kernel: kauditd_printk_skb: 90 callbacks suppressed Dec 12 22:51:54.534235 kernel: audit: type=1334 audit(1765579914.531:576): prog-id=173 op=LOAD Dec 12 22:51:54.534268 kernel: audit: type=1300 audit(1765579914.531:576): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.540715 kernel: audit: type=1327 audit(1765579914.531:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.531000 audit: BPF prog-id=174 op=LOAD Dec 12 22:51:54.541551 kernel: audit: type=1334 audit(1765579914.531:577): prog-id=174 op=LOAD Dec 12 22:51:54.541590 kernel: audit: type=1300 audit(1765579914.531:577): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.548220 kernel: audit: type=1327 audit(1765579914.531:577): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.531000 audit: BPF prog-id=174 op=UNLOAD Dec 12 22:51:54.549145 kernel: audit: type=1334 audit(1765579914.531:578): prog-id=174 op=UNLOAD Dec 12 22:51:54.531000 audit[3485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.552389 kernel: audit: type=1300 audit(1765579914.531:578): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.556181 kernel: audit: type=1327 audit(1765579914.531:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.556244 kernel: audit: type=1334 audit(1765579914.531:579): prog-id=173 op=UNLOAD Dec 12 22:51:54.531000 audit: BPF prog-id=173 op=UNLOAD Dec 12 22:51:54.531000 audit[3485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.531000 audit: BPF prog-id=175 op=LOAD Dec 12 22:51:54.531000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3272 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643461643561343837666566623338303034623734383233616465 Dec 12 22:51:54.573588 containerd[1577]: time="2025-12-12T22:51:54.573447582Z" level=info msg="StartContainer for \"9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56\" returns successfully" Dec 12 22:51:54.741862 kubelet[2729]: E1212 22:51:54.741731 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:51:54.850127 kubelet[2729]: E1212 22:51:54.850087 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:55.112340 containerd[1577]: time="2025-12-12T22:51:55.112219602Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 22:51:55.114834 systemd[1]: cri-containerd-9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56.scope: Deactivated successfully. Dec 12 22:51:55.115337 systemd[1]: cri-containerd-9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56.scope: Consumed 463ms CPU time, 175.8M memory peak, 2.3M read from disk, 165.9M written to disk. Dec 12 22:51:55.117931 containerd[1577]: time="2025-12-12T22:51:55.117896253Z" level=info msg="received container exit event container_id:\"9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56\" id:\"9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56\" pid:3498 exited_at:{seconds:1765579915 nanos:117673611}" Dec 12 22:51:55.120000 audit: BPF prog-id=175 op=UNLOAD Dec 12 22:51:55.138018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bd4ad5a487fefb38004b74823adea336bef822c705b08785b1b6226e7283a56-rootfs.mount: Deactivated successfully. Dec 12 22:51:55.138960 kubelet[2729]: I1212 22:51:55.138929 2729 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 22:51:55.196183 systemd[1]: Created slice kubepods-burstable-pod74b7771b_32af_40f6_97b9_bf5ee39960ad.slice - libcontainer container kubepods-burstable-pod74b7771b_32af_40f6_97b9_bf5ee39960ad.slice. Dec 12 22:51:55.207732 systemd[1]: Created slice kubepods-besteffort-pod4785c37b_156a_4faf_8cfd_f73b6f7355f4.slice - libcontainer container kubepods-besteffort-pod4785c37b_156a_4faf_8cfd_f73b6f7355f4.slice. 
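The recurring "Nameserver limits exceeded" warnings are the kubelet trimming pod DNS configuration: it keeps at most three nameservers (the classic glibc resolver limit), and the resolv.conf it reads on this host evidently lists more than the 1.1.1.1, 1.0.0.1 and 8.8.8.8 that survive. A rough sketch of that trimming, where the path and the limit of three are assumptions of the sketch rather than the kubelet's actual code:

```go
// resolv-trim: simplified stand-in for the behaviour behind the
// "Nameserver limits exceeded" warnings above. Path and limit are assumptions.
package main

import (
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver maximum, the limit the warning refers to

func main() {
	data, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var servers []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("nameservers within limit: %s\n", strings.Join(servers, " "))
}
```

The warning stops once the resolv.conf the kubelet is pointed at (its --resolv-conf flag) lists three or fewer nameservers; it is cosmetic as long as the three kept servers are sufficient.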
Dec 12 22:51:55.222546 kubelet[2729]: I1212 22:51:55.222188 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74b7771b-32af-40f6-97b9-bf5ee39960ad-config-volume\") pod \"coredns-668d6bf9bc-cxx5s\" (UID: \"74b7771b-32af-40f6-97b9-bf5ee39960ad\") " pod="kube-system/coredns-668d6bf9bc-cxx5s" Dec 12 22:51:55.222546 kubelet[2729]: I1212 22:51:55.222255 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2jd\" (UniqueName: \"kubernetes.io/projected/4785c37b-156a-4faf-8cfd-f73b6f7355f4-kube-api-access-4f2jd\") pod \"calico-kube-controllers-6689b476fb-cz8sh\" (UID: \"4785c37b-156a-4faf-8cfd-f73b6f7355f4\") " pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" Dec 12 22:51:55.222546 kubelet[2729]: I1212 22:51:55.222284 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e402111-6f81-4248-9211-701497715292-config\") pod \"goldmane-666569f655-ls8tz\" (UID: \"0e402111-6f81-4248-9211-701497715292\") " pod="calico-system/goldmane-666569f655-ls8tz" Dec 12 22:51:55.222546 kubelet[2729]: I1212 22:51:55.222377 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a816068e-6aeb-4536-8ece-56ad68b4e384-calico-apiserver-certs\") pod \"calico-apiserver-756859cb7d-ctm57\" (UID: \"a816068e-6aeb-4536-8ece-56ad68b4e384\") " pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" Dec 12 22:51:55.222546 kubelet[2729]: I1212 22:51:55.222400 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e402111-6f81-4248-9211-701497715292-goldmane-ca-bundle\") pod \"goldmane-666569f655-ls8tz\" (UID: \"0e402111-6f81-4248-9211-701497715292\") " pod="calico-system/goldmane-666569f655-ls8tz" Dec 12 22:51:55.222790 kubelet[2729]: I1212 22:51:55.222419 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/22da6c49-6a9b-4270-91c0-2f3cf459c08b-calico-apiserver-certs\") pod \"calico-apiserver-756859cb7d-lb748\" (UID: \"22da6c49-6a9b-4270-91c0-2f3cf459c08b\") " pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" Dec 12 22:51:55.222790 kubelet[2729]: I1212 22:51:55.222442 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-backend-key-pair\") pod \"whisker-5c4fcfbb5b-fkvtt\" (UID: \"5faa9323-ead8-4dd6-8bad-100205ae39ca\") " pod="calico-system/whisker-5c4fcfbb5b-fkvtt" Dec 12 22:51:55.222790 kubelet[2729]: I1212 22:51:55.222539 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-ca-bundle\") pod \"whisker-5c4fcfbb5b-fkvtt\" (UID: \"5faa9323-ead8-4dd6-8bad-100205ae39ca\") " pod="calico-system/whisker-5c4fcfbb5b-fkvtt" Dec 12 22:51:55.222790 kubelet[2729]: I1212 22:51:55.222574 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/98a9aa8b-84da-46fd-aac6-c85c27e3b277-config-volume\") pod \"coredns-668d6bf9bc-v6p5n\" (UID: \"98a9aa8b-84da-46fd-aac6-c85c27e3b277\") " pod="kube-system/coredns-668d6bf9bc-v6p5n" Dec 12 22:51:55.222790 kubelet[2729]: I1212 22:51:55.222612 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4785c37b-156a-4faf-8cfd-f73b6f7355f4-tigera-ca-bundle\") pod \"calico-kube-controllers-6689b476fb-cz8sh\" (UID: \"4785c37b-156a-4faf-8cfd-f73b6f7355f4\") " pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" Dec 12 22:51:55.222903 kubelet[2729]: I1212 22:51:55.222633 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0e402111-6f81-4248-9211-701497715292-goldmane-key-pair\") pod \"goldmane-666569f655-ls8tz\" (UID: \"0e402111-6f81-4248-9211-701497715292\") " pod="calico-system/goldmane-666569f655-ls8tz" Dec 12 22:51:55.222903 kubelet[2729]: I1212 22:51:55.222683 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p526l\" (UniqueName: \"kubernetes.io/projected/0e402111-6f81-4248-9211-701497715292-kube-api-access-p526l\") pod \"goldmane-666569f655-ls8tz\" (UID: \"0e402111-6f81-4248-9211-701497715292\") " pod="calico-system/goldmane-666569f655-ls8tz" Dec 12 22:51:55.222903 kubelet[2729]: I1212 22:51:55.222707 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqclc\" (UniqueName: \"kubernetes.io/projected/74b7771b-32af-40f6-97b9-bf5ee39960ad-kube-api-access-fqclc\") pod \"coredns-668d6bf9bc-cxx5s\" (UID: \"74b7771b-32af-40f6-97b9-bf5ee39960ad\") " pod="kube-system/coredns-668d6bf9bc-cxx5s" Dec 12 22:51:55.222903 kubelet[2729]: I1212 22:51:55.222734 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4cw\" (UniqueName: \"kubernetes.io/projected/22da6c49-6a9b-4270-91c0-2f3cf459c08b-kube-api-access-sl4cw\") pod \"calico-apiserver-756859cb7d-lb748\" (UID: \"22da6c49-6a9b-4270-91c0-2f3cf459c08b\") " pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" Dec 12 22:51:55.222903 kubelet[2729]: I1212 22:51:55.222758 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5h9m\" (UniqueName: \"kubernetes.io/projected/a816068e-6aeb-4536-8ece-56ad68b4e384-kube-api-access-b5h9m\") pod \"calico-apiserver-756859cb7d-ctm57\" (UID: \"a816068e-6aeb-4536-8ece-56ad68b4e384\") " pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" Dec 12 22:51:55.223015 kubelet[2729]: I1212 22:51:55.222780 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrh4\" (UniqueName: \"kubernetes.io/projected/5faa9323-ead8-4dd6-8bad-100205ae39ca-kube-api-access-gfrh4\") pod \"whisker-5c4fcfbb5b-fkvtt\" (UID: \"5faa9323-ead8-4dd6-8bad-100205ae39ca\") " pod="calico-system/whisker-5c4fcfbb5b-fkvtt" Dec 12 22:51:55.223015 kubelet[2729]: I1212 22:51:55.222801 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4thx\" (UniqueName: \"kubernetes.io/projected/98a9aa8b-84da-46fd-aac6-c85c27e3b277-kube-api-access-l4thx\") pod \"coredns-668d6bf9bc-v6p5n\" (UID: \"98a9aa8b-84da-46fd-aac6-c85c27e3b277\") " 
pod="kube-system/coredns-668d6bf9bc-v6p5n" Dec 12 22:51:55.227961 systemd[1]: Created slice kubepods-burstable-pod98a9aa8b_84da_46fd_aac6_c85c27e3b277.slice - libcontainer container kubepods-burstable-pod98a9aa8b_84da_46fd_aac6_c85c27e3b277.slice. Dec 12 22:51:55.235556 systemd[1]: Created slice kubepods-besteffort-pod22da6c49_6a9b_4270_91c0_2f3cf459c08b.slice - libcontainer container kubepods-besteffort-pod22da6c49_6a9b_4270_91c0_2f3cf459c08b.slice. Dec 12 22:51:55.242623 systemd[1]: Created slice kubepods-besteffort-pod5faa9323_ead8_4dd6_8bad_100205ae39ca.slice - libcontainer container kubepods-besteffort-pod5faa9323_ead8_4dd6_8bad_100205ae39ca.slice. Dec 12 22:51:55.250069 systemd[1]: Created slice kubepods-besteffort-pod0e402111_6f81_4248_9211_701497715292.slice - libcontainer container kubepods-besteffort-pod0e402111_6f81_4248_9211_701497715292.slice. Dec 12 22:51:55.257648 systemd[1]: Created slice kubepods-besteffort-poda816068e_6aeb_4536_8ece_56ad68b4e384.slice - libcontainer container kubepods-besteffort-poda816068e_6aeb_4536_8ece_56ad68b4e384.slice. Dec 12 22:51:55.504035 kubelet[2729]: E1212 22:51:55.504006 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:55.504690 containerd[1577]: time="2025-12-12T22:51:55.504652114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxx5s,Uid:74b7771b-32af-40f6-97b9-bf5ee39960ad,Namespace:kube-system,Attempt:0,}" Dec 12 22:51:55.526689 containerd[1577]: time="2025-12-12T22:51:55.526625819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689b476fb-cz8sh,Uid:4785c37b-156a-4faf-8cfd-f73b6f7355f4,Namespace:calico-system,Attempt:0,}" Dec 12 22:51:55.532319 kubelet[2729]: E1212 22:51:55.532274 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:55.532962 containerd[1577]: time="2025-12-12T22:51:55.532918943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v6p5n,Uid:98a9aa8b-84da-46fd-aac6-c85c27e3b277,Namespace:kube-system,Attempt:0,}" Dec 12 22:51:55.539039 containerd[1577]: time="2025-12-12T22:51:55.539002588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-lb748,Uid:22da6c49-6a9b-4270-91c0-2f3cf459c08b,Namespace:calico-apiserver,Attempt:0,}" Dec 12 22:51:55.546200 containerd[1577]: time="2025-12-12T22:51:55.546107262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4fcfbb5b-fkvtt,Uid:5faa9323-ead8-4dd6-8bad-100205ae39ca,Namespace:calico-system,Attempt:0,}" Dec 12 22:51:55.554607 containerd[1577]: time="2025-12-12T22:51:55.554443964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls8tz,Uid:0e402111-6f81-4248-9211-701497715292,Namespace:calico-system,Attempt:0,}" Dec 12 22:51:55.560625 containerd[1577]: time="2025-12-12T22:51:55.560372101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-ctm57,Uid:a816068e-6aeb-4536-8ece-56ad68b4e384,Namespace:calico-apiserver,Attempt:0,}" Dec 12 22:51:55.649030 containerd[1577]: time="2025-12-12T22:51:55.648965049Z" level=error msg="Failed to destroy network for sandbox \"e87b978bc30f62b923e641877843f6713fc694ad07e13922e9e4e2c282fbe86e\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.649535 containerd[1577]: time="2025-12-12T22:51:55.649430215Z" level=error msg="Failed to destroy network for sandbox \"23cff3a830b6ef0bbf3e368b276bfcd499ea62a9eabfe3729a5f7898ae9b6cd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.651625 containerd[1577]: time="2025-12-12T22:51:55.651576612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689b476fb-cz8sh,Uid:4785c37b-156a-4faf-8cfd-f73b6f7355f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e87b978bc30f62b923e641877843f6713fc694ad07e13922e9e4e2c282fbe86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.653729 containerd[1577]: time="2025-12-12T22:51:55.653680721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v6p5n,Uid:98a9aa8b-84da-46fd-aac6-c85c27e3b277,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cff3a830b6ef0bbf3e368b276bfcd499ea62a9eabfe3729a5f7898ae9b6cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.654298 kubelet[2729]: E1212 22:51:55.654256 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cff3a830b6ef0bbf3e368b276bfcd499ea62a9eabfe3729a5f7898ae9b6cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.654359 kubelet[2729]: E1212 22:51:55.654328 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cff3a830b6ef0bbf3e368b276bfcd499ea62a9eabfe3729a5f7898ae9b6cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v6p5n" Dec 12 22:51:55.654359 kubelet[2729]: E1212 22:51:55.654347 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cff3a830b6ef0bbf3e368b276bfcd499ea62a9eabfe3729a5f7898ae9b6cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v6p5n" Dec 12 22:51:55.654416 kubelet[2729]: E1212 22:51:55.654386 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v6p5n_kube-system(98a9aa8b-84da-46fd-aac6-c85c27e3b277)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v6p5n_kube-system(98a9aa8b-84da-46fd-aac6-c85c27e3b277)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"23cff3a830b6ef0bbf3e368b276bfcd499ea62a9eabfe3729a5f7898ae9b6cd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v6p5n" podUID="98a9aa8b-84da-46fd-aac6-c85c27e3b277" Dec 12 22:51:55.656453 kubelet[2729]: E1212 22:51:55.656412 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e87b978bc30f62b923e641877843f6713fc694ad07e13922e9e4e2c282fbe86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.656510 kubelet[2729]: E1212 22:51:55.656477 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e87b978bc30f62b923e641877843f6713fc694ad07e13922e9e4e2c282fbe86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" Dec 12 22:51:55.656510 kubelet[2729]: E1212 22:51:55.656497 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e87b978bc30f62b923e641877843f6713fc694ad07e13922e9e4e2c282fbe86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" Dec 12 22:51:55.656569 kubelet[2729]: E1212 22:51:55.656546 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6689b476fb-cz8sh_calico-system(4785c37b-156a-4faf-8cfd-f73b6f7355f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6689b476fb-cz8sh_calico-system(4785c37b-156a-4faf-8cfd-f73b6f7355f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e87b978bc30f62b923e641877843f6713fc694ad07e13922e9e4e2c282fbe86e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:51:55.657738 containerd[1577]: time="2025-12-12T22:51:55.657704465Z" level=error msg="Failed to destroy network for sandbox \"9cdf4b7609dbdf3903322ab1a2ae627942eda66be7626c57f059c6255d3e37ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.659442 containerd[1577]: time="2025-12-12T22:51:55.659340248Z" level=error msg="Failed to destroy network for sandbox \"25b7b37bf464002e2aac71053799c7de120d75f45847e228756c95747600bea1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.659884 containerd[1577]: time="2025-12-12T22:51:55.659855223Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxx5s,Uid:74b7771b-32af-40f6-97b9-bf5ee39960ad,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdf4b7609dbdf3903322ab1a2ae627942eda66be7626c57f059c6255d3e37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.660627 kubelet[2729]: E1212 22:51:55.660425 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdf4b7609dbdf3903322ab1a2ae627942eda66be7626c57f059c6255d3e37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.660729 kubelet[2729]: E1212 22:51:55.660662 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdf4b7609dbdf3903322ab1a2ae627942eda66be7626c57f059c6255d3e37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cxx5s" Dec 12 22:51:55.660729 kubelet[2729]: E1212 22:51:55.660682 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdf4b7609dbdf3903322ab1a2ae627942eda66be7626c57f059c6255d3e37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cxx5s" Dec 12 22:51:55.660781 kubelet[2729]: E1212 22:51:55.660728 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cxx5s_kube-system(74b7771b-32af-40f6-97b9-bf5ee39960ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cxx5s_kube-system(74b7771b-32af-40f6-97b9-bf5ee39960ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cdf4b7609dbdf3903322ab1a2ae627942eda66be7626c57f059c6255d3e37ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cxx5s" podUID="74b7771b-32af-40f6-97b9-bf5ee39960ad" Dec 12 22:51:55.661867 containerd[1577]: time="2025-12-12T22:51:55.661824588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4fcfbb5b-fkvtt,Uid:5faa9323-ead8-4dd6-8bad-100205ae39ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b7b37bf464002e2aac71053799c7de120d75f45847e228756c95747600bea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.662206 kubelet[2729]: E1212 22:51:55.662179 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b7b37bf464002e2aac71053799c7de120d75f45847e228756c95747600bea1\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.662278 kubelet[2729]: E1212 22:51:55.662219 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b7b37bf464002e2aac71053799c7de120d75f45847e228756c95747600bea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c4fcfbb5b-fkvtt" Dec 12 22:51:55.662278 kubelet[2729]: E1212 22:51:55.662236 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b7b37bf464002e2aac71053799c7de120d75f45847e228756c95747600bea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c4fcfbb5b-fkvtt" Dec 12 22:51:55.662278 kubelet[2729]: E1212 22:51:55.662268 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c4fcfbb5b-fkvtt_calico-system(5faa9323-ead8-4dd6-8bad-100205ae39ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c4fcfbb5b-fkvtt_calico-system(5faa9323-ead8-4dd6-8bad-100205ae39ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25b7b37bf464002e2aac71053799c7de120d75f45847e228756c95747600bea1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c4fcfbb5b-fkvtt" podUID="5faa9323-ead8-4dd6-8bad-100205ae39ca" Dec 12 22:51:55.676687 containerd[1577]: time="2025-12-12T22:51:55.676637728Z" level=error msg="Failed to destroy network for sandbox \"f2e53cb5229529ca9f3a2b24a0917501814705a8b814fc48f10a0cd971de1921\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.679407 containerd[1577]: time="2025-12-12T22:51:55.678653541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-lb748,Uid:22da6c49-6a9b-4270-91c0-2f3cf459c08b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e53cb5229529ca9f3a2b24a0917501814705a8b814fc48f10a0cd971de1921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.679541 kubelet[2729]: E1212 22:51:55.678884 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e53cb5229529ca9f3a2b24a0917501814705a8b814fc48f10a0cd971de1921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.679541 kubelet[2729]: E1212 22:51:55.678938 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"f2e53cb5229529ca9f3a2b24a0917501814705a8b814fc48f10a0cd971de1921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" Dec 12 22:51:55.679541 kubelet[2729]: E1212 22:51:55.678972 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e53cb5229529ca9f3a2b24a0917501814705a8b814fc48f10a0cd971de1921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" Dec 12 22:51:55.679650 kubelet[2729]: E1212 22:51:55.679016 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-756859cb7d-lb748_calico-apiserver(22da6c49-6a9b-4270-91c0-2f3cf459c08b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-756859cb7d-lb748_calico-apiserver(22da6c49-6a9b-4270-91c0-2f3cf459c08b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2e53cb5229529ca9f3a2b24a0917501814705a8b814fc48f10a0cd971de1921\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:51:55.679703 containerd[1577]: time="2025-12-12T22:51:55.679653205Z" level=error msg="Failed to destroy network for sandbox \"e206ad9f587d75fbba553f6439f0149529d38c8e158356e452c6cd878f2f6c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.679962 containerd[1577]: time="2025-12-12T22:51:55.679931937Z" level=error msg="Failed to destroy network for sandbox \"e5331ae9574b791046fdfeddb3e8c4dcde987a44d46ea62130a58665ac01146d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.681851 containerd[1577]: time="2025-12-12T22:51:55.681764036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-ctm57,Uid:a816068e-6aeb-4536-8ece-56ad68b4e384,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206ad9f587d75fbba553f6439f0149529d38c8e158356e452c6cd878f2f6c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.682074 kubelet[2729]: E1212 22:51:55.682015 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206ad9f587d75fbba553f6439f0149529d38c8e158356e452c6cd878f2f6c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.682131 kubelet[2729]: E1212 
22:51:55.682094 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206ad9f587d75fbba553f6439f0149529d38c8e158356e452c6cd878f2f6c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" Dec 12 22:51:55.682131 kubelet[2729]: E1212 22:51:55.682115 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206ad9f587d75fbba553f6439f0149529d38c8e158356e452c6cd878f2f6c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" Dec 12 22:51:55.682253 kubelet[2729]: E1212 22:51:55.682158 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-756859cb7d-ctm57_calico-apiserver(a816068e-6aeb-4536-8ece-56ad68b4e384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-756859cb7d-ctm57_calico-apiserver(a816068e-6aeb-4536-8ece-56ad68b4e384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e206ad9f587d75fbba553f6439f0149529d38c8e158356e452c6cd878f2f6c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:51:55.684748 containerd[1577]: time="2025-12-12T22:51:55.684634687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls8tz,Uid:0e402111-6f81-4248-9211-701497715292,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5331ae9574b791046fdfeddb3e8c4dcde987a44d46ea62130a58665ac01146d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.684845 kubelet[2729]: E1212 22:51:55.684811 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5331ae9574b791046fdfeddb3e8c4dcde987a44d46ea62130a58665ac01146d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:55.684944 kubelet[2729]: E1212 22:51:55.684856 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5331ae9574b791046fdfeddb3e8c4dcde987a44d46ea62130a58665ac01146d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ls8tz" Dec 12 22:51:55.684944 kubelet[2729]: E1212 22:51:55.684873 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e5331ae9574b791046fdfeddb3e8c4dcde987a44d46ea62130a58665ac01146d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ls8tz" Dec 12 22:51:55.685045 kubelet[2729]: E1212 22:51:55.684977 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ls8tz_calico-system(0e402111-6f81-4248-9211-701497715292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ls8tz_calico-system(0e402111-6f81-4248-9211-701497715292)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5331ae9574b791046fdfeddb3e8c4dcde987a44d46ea62130a58665ac01146d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:51:55.856055 kubelet[2729]: E1212 22:51:55.855946 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:55.857465 containerd[1577]: time="2025-12-12T22:51:55.857275582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 22:51:56.754398 systemd[1]: Created slice kubepods-besteffort-pod3344cb69_6333_4eee_b470_ee8fe022cd55.slice - libcontainer container kubepods-besteffort-pod3344cb69_6333_4eee_b470_ee8fe022cd55.slice. Dec 12 22:51:56.769713 containerd[1577]: time="2025-12-12T22:51:56.769664584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nknqg,Uid:3344cb69-6333-4eee-b470-ee8fe022cd55,Namespace:calico-system,Attempt:0,}" Dec 12 22:51:56.886896 containerd[1577]: time="2025-12-12T22:51:56.886807280Z" level=error msg="Failed to destroy network for sandbox \"72d73beb988bc61a9eaa8ed8e1f3f028c13cc4c1413a9cff850b9ecab14b4671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:56.889319 systemd[1]: run-netns-cni\x2d5fa1c690\x2d16f1\x2d969f\x2da078\x2dfc71ad564918.mount: Deactivated successfully. 
Dec 12 22:51:56.939105 containerd[1577]: time="2025-12-12T22:51:56.938918122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nknqg,Uid:3344cb69-6333-4eee-b470-ee8fe022cd55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d73beb988bc61a9eaa8ed8e1f3f028c13cc4c1413a9cff850b9ecab14b4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:56.939430 kubelet[2729]: E1212 22:51:56.939362 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d73beb988bc61a9eaa8ed8e1f3f028c13cc4c1413a9cff850b9ecab14b4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 22:51:56.939430 kubelet[2729]: E1212 22:51:56.939430 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d73beb988bc61a9eaa8ed8e1f3f028c13cc4c1413a9cff850b9ecab14b4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:56.939805 kubelet[2729]: E1212 22:51:56.939450 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d73beb988bc61a9eaa8ed8e1f3f028c13cc4c1413a9cff850b9ecab14b4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nknqg" Dec 12 22:51:56.939805 kubelet[2729]: E1212 22:51:56.939498 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72d73beb988bc61a9eaa8ed8e1f3f028c13cc4c1413a9cff850b9ecab14b4671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:51:58.654603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679443769.mount: Deactivated successfully. 
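The deeply escaped quotes in the pod_workers errors (\\\") are layered formatting rather than corruption: each layer quotes the error it wraps, and the structured logger quotes the final string once more as err="…". A small illustration of how the nesting accumulates, with shortened, partly hypothetical message text (the real kubelet and containerd format strings differ):

```go
// quote-nesting: why the pod_workers errors above contain \\\". Each layer
// formats what it wraps with %q, and the logger quotes the whole chain again
// as err="...". Message text is shortened/hypothetical; only the nesting matters.
package main

import (
	"errors"
	"fmt"
)

func main() {
	cni := errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
	rpc := fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox %q: %v", "72d73beb", cni)
	sandbox := fmt.Errorf("Failed to create sandbox for pod %q: %v", "csi-node-driver-nknqg_calico-system(3344cb69)", rpc)
	worker := fmt.Errorf("failed to %q for %q with CreatePodSandboxError: %q",
		"CreatePodSandbox", "csi-node-driver-nknqg_calico-system(3344cb69)", sandbox.Error())

	// One more layer of quoting, as the structured logger applies to err="...":
	fmt.Printf("err=%q\n", worker.Error())
}
```

Reading the innermost message first (the stat failure) and working outward is usually the fastest way through these lines.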
Dec 12 22:51:58.919576 containerd[1577]: time="2025-12-12T22:51:58.919394037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:58.920219 containerd[1577]: time="2025-12-12T22:51:58.920176205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 22:51:58.920997 containerd[1577]: time="2025-12-12T22:51:58.920963773Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:58.922964 containerd[1577]: time="2025-12-12T22:51:58.922931095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 22:51:58.923560 containerd[1577]: time="2025-12-12T22:51:58.923365526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.065977283s" Dec 12 22:51:58.923560 containerd[1577]: time="2025-12-12T22:51:58.923394051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 22:51:58.937241 containerd[1577]: time="2025-12-12T22:51:58.936678863Z" level=info msg="CreateContainer within sandbox \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 22:51:58.949890 containerd[1577]: time="2025-12-12T22:51:58.949849937Z" level=info msg="Container 99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:51:58.960325 containerd[1577]: time="2025-12-12T22:51:58.960277922Z" level=info msg="CreateContainer within sandbox \"db4c4021e7a64d1e90905fef5786c4281b4467b9c199dd8551ec0b74bde28b94\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c\"" Dec 12 22:51:58.963440 containerd[1577]: time="2025-12-12T22:51:58.962921994Z" level=info msg="StartContainer for \"99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c\"" Dec 12 22:51:58.965249 containerd[1577]: time="2025-12-12T22:51:58.965222090Z" level=info msg="connecting to shim 99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c" address="unix:///run/containerd/s/3c96800bba57a328c1566d3d96d7836b930a764292cdd6c1147be2ddbb6433a1" protocol=ttrpc version=3 Dec 12 22:51:59.000791 systemd[1]: Started cri-containerd-99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c.scope - libcontainer container 99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c. 
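For scale, the three Calico image pulls logged in this section moved very different amounts of data. A back-of-the-envelope rate calculation using the "bytes read" counters and pull durations copied from the lines above ("bytes read" is what containerd fetched on this pull, which is not the same as the unpacked image size):

```go
// pull-rate: rough throughput for the three image pulls in this section. The
// byte counts and durations are copied from the log lines above; everything
// else is just arithmetic.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read=..." from the matching "stop pulling image" line
		seconds float64 // duration from the "Pulled image ... in ..." line
	}{
		{"calico/pod2daemon-flexvol:v3.30.4", 4262566, 0.996431505},
		{"calico/cni:v3.30.4", 65921248, 2.597336634},
		{"calico/node:v3.30.4", 150930912, 3.065977283},
	}
	for _, p := range pulls {
		mib := p.bytes / (1 << 20)
		fmt.Printf("%-36s %7.1f MiB in %5.2fs  ~%5.1f MiB/s\n", p.image, mib, p.seconds, mib/p.seconds)
	}
}
```

All three images were pulled for the same calico-node pod sandbox (db4c4021…), which is why each CreateContainer above targets the same sandbox and the same shim address.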
Dec 12 22:51:59.075000 audit: BPF prog-id=176 op=LOAD Dec 12 22:51:59.075000 audit[3804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3272 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:59.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636436333063373936393531616162663534303337626633363466 Dec 12 22:51:59.075000 audit: BPF prog-id=177 op=LOAD Dec 12 22:51:59.075000 audit[3804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3272 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:59.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636436333063373936393531616162663534303337626633363466 Dec 12 22:51:59.075000 audit: BPF prog-id=177 op=UNLOAD Dec 12 22:51:59.075000 audit[3804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:59.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636436333063373936393531616162663534303337626633363466 Dec 12 22:51:59.075000 audit: BPF prog-id=176 op=UNLOAD Dec 12 22:51:59.075000 audit[3804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:59.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636436333063373936393531616162663534303337626633363466 Dec 12 22:51:59.075000 audit: BPF prog-id=178 op=LOAD Dec 12 22:51:59.075000 audit[3804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3272 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:51:59.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636436333063373936393531616162663534303337626633363466 Dec 12 22:51:59.105127 containerd[1577]: time="2025-12-12T22:51:59.105020414Z" level=info msg="StartContainer for 
\"99cd630c796951aabf54037bf364ffc1096e0f2a5a358563694b780d7a713d4c\" returns successfully" Dec 12 22:51:59.231061 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 22:51:59.231175 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 22:51:59.456010 kubelet[2729]: I1212 22:51:59.455967 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-ca-bundle\") pod \"5faa9323-ead8-4dd6-8bad-100205ae39ca\" (UID: \"5faa9323-ead8-4dd6-8bad-100205ae39ca\") " Dec 12 22:51:59.456010 kubelet[2729]: I1212 22:51:59.456033 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-backend-key-pair\") pod \"5faa9323-ead8-4dd6-8bad-100205ae39ca\" (UID: \"5faa9323-ead8-4dd6-8bad-100205ae39ca\") " Dec 12 22:51:59.456875 kubelet[2729]: I1212 22:51:59.456056 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfrh4\" (UniqueName: \"kubernetes.io/projected/5faa9323-ead8-4dd6-8bad-100205ae39ca-kube-api-access-gfrh4\") pod \"5faa9323-ead8-4dd6-8bad-100205ae39ca\" (UID: \"5faa9323-ead8-4dd6-8bad-100205ae39ca\") " Dec 12 22:51:59.465183 kubelet[2729]: I1212 22:51:59.464726 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5faa9323-ead8-4dd6-8bad-100205ae39ca" (UID: "5faa9323-ead8-4dd6-8bad-100205ae39ca"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 22:51:59.473378 kubelet[2729]: I1212 22:51:59.473333 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5faa9323-ead8-4dd6-8bad-100205ae39ca-kube-api-access-gfrh4" (OuterVolumeSpecName: "kube-api-access-gfrh4") pod "5faa9323-ead8-4dd6-8bad-100205ae39ca" (UID: "5faa9323-ead8-4dd6-8bad-100205ae39ca"). InnerVolumeSpecName "kube-api-access-gfrh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 22:51:59.473723 kubelet[2729]: I1212 22:51:59.473633 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5faa9323-ead8-4dd6-8bad-100205ae39ca" (UID: "5faa9323-ead8-4dd6-8bad-100205ae39ca"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 22:51:59.557017 kubelet[2729]: I1212 22:51:59.556964 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 22:51:59.557017 kubelet[2729]: I1212 22:51:59.557005 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5faa9323-ead8-4dd6-8bad-100205ae39ca-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 22:51:59.557017 kubelet[2729]: I1212 22:51:59.557016 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfrh4\" (UniqueName: \"kubernetes.io/projected/5faa9323-ead8-4dd6-8bad-100205ae39ca-kube-api-access-gfrh4\") on node \"localhost\" DevicePath \"\"" Dec 12 22:51:59.655945 systemd[1]: var-lib-kubelet-pods-5faa9323\x2dead8\x2d4dd6\x2d8bad\x2d100205ae39ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgfrh4.mount: Deactivated successfully. Dec 12 22:51:59.656038 systemd[1]: var-lib-kubelet-pods-5faa9323\x2dead8\x2d4dd6\x2d8bad\x2d100205ae39ca-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 22:51:59.751240 systemd[1]: Removed slice kubepods-besteffort-pod5faa9323_ead8_4dd6_8bad_100205ae39ca.slice - libcontainer container kubepods-besteffort-pod5faa9323_ead8_4dd6_8bad_100205ae39ca.slice. Dec 12 22:51:59.867773 kubelet[2729]: E1212 22:51:59.867724 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:51:59.887567 kubelet[2729]: I1212 22:51:59.887056 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mt69f" podStartSLOduration=2.280846269 podStartE2EDuration="11.887037299s" podCreationTimestamp="2025-12-12 22:51:48 +0000 UTC" firstStartedPulling="2025-12-12 22:51:49.321379384 +0000 UTC m=+21.665968913" lastFinishedPulling="2025-12-12 22:51:58.927570414 +0000 UTC m=+31.272159943" observedRunningTime="2025-12-12 22:51:59.886549942 +0000 UTC m=+32.231139511" watchObservedRunningTime="2025-12-12 22:51:59.887037299 +0000 UTC m=+32.231626828" Dec 12 22:51:59.940019 systemd[1]: Created slice kubepods-besteffort-pod798aee2d_22cd_4354_84e3_047cb77c02aa.slice - libcontainer container kubepods-besteffort-pod798aee2d_22cd_4354_84e3_047cb77c02aa.slice. 
Dec 12 22:51:59.960073 kubelet[2729]: I1212 22:51:59.959896 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/798aee2d-22cd-4354-84e3-047cb77c02aa-whisker-backend-key-pair\") pod \"whisker-669f8df78-7hr8p\" (UID: \"798aee2d-22cd-4354-84e3-047cb77c02aa\") " pod="calico-system/whisker-669f8df78-7hr8p" Dec 12 22:51:59.966396 kubelet[2729]: I1212 22:51:59.960484 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/798aee2d-22cd-4354-84e3-047cb77c02aa-whisker-ca-bundle\") pod \"whisker-669f8df78-7hr8p\" (UID: \"798aee2d-22cd-4354-84e3-047cb77c02aa\") " pod="calico-system/whisker-669f8df78-7hr8p" Dec 12 22:51:59.966606 kubelet[2729]: I1212 22:51:59.966582 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5psvm\" (UniqueName: \"kubernetes.io/projected/798aee2d-22cd-4354-84e3-047cb77c02aa-kube-api-access-5psvm\") pod \"whisker-669f8df78-7hr8p\" (UID: \"798aee2d-22cd-4354-84e3-047cb77c02aa\") " pod="calico-system/whisker-669f8df78-7hr8p" Dec 12 22:52:00.244750 containerd[1577]: time="2025-12-12T22:52:00.244705147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669f8df78-7hr8p,Uid:798aee2d-22cd-4354-84e3-047cb77c02aa,Namespace:calico-system,Attempt:0,}" Dec 12 22:52:00.436854 systemd-networkd[1292]: cali2d6333d9d77: Link UP Dec 12 22:52:00.437069 systemd-networkd[1292]: cali2d6333d9d77: Gained carrier Dec 12 22:52:00.454172 containerd[1577]: 2025-12-12 22:52:00.299 [INFO][3895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:00.454172 containerd[1577]: 2025-12-12 22:52:00.329 [INFO][3895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--669f8df78--7hr8p-eth0 whisker-669f8df78- calico-system 798aee2d-22cd-4354-84e3-047cb77c02aa 888 0 2025-12-12 22:51:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:669f8df78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-669f8df78-7hr8p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d6333d9d77 [] [] }} ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-" Dec 12 22:52:00.454172 containerd[1577]: 2025-12-12 22:52:00.329 [INFO][3895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.454172 containerd[1577]: 2025-12-12 22:52:00.388 [INFO][3909] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" HandleID="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Workload="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.388 [INFO][3909] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" 
HandleID="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Workload="localhost-k8s-whisker--669f8df78--7hr8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-669f8df78-7hr8p", "timestamp":"2025-12-12 22:52:00.388701341 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.389 [INFO][3909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.389 [INFO][3909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.389 [INFO][3909] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.402 [INFO][3909] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" host="localhost" Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.409 [INFO][3909] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.413 [INFO][3909] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.415 [INFO][3909] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.417 [INFO][3909] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:00.454370 containerd[1577]: 2025-12-12 22:52:00.417 [INFO][3909] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" host="localhost" Dec 12 22:52:00.454682 containerd[1577]: 2025-12-12 22:52:00.419 [INFO][3909] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4 Dec 12 22:52:00.454682 containerd[1577]: 2025-12-12 22:52:00.422 [INFO][3909] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" host="localhost" Dec 12 22:52:00.454682 containerd[1577]: 2025-12-12 22:52:00.427 [INFO][3909] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" host="localhost" Dec 12 22:52:00.454682 containerd[1577]: 2025-12-12 22:52:00.427 [INFO][3909] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" host="localhost" Dec 12 22:52:00.454682 containerd[1577]: 2025-12-12 22:52:00.427 [INFO][3909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:00.454682 containerd[1577]: 2025-12-12 22:52:00.427 [INFO][3909] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" HandleID="k8s-pod-network.3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Workload="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.454796 containerd[1577]: 2025-12-12 22:52:00.430 [INFO][3895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--669f8df78--7hr8p-eth0", GenerateName:"whisker-669f8df78-", Namespace:"calico-system", SelfLink:"", UID:"798aee2d-22cd-4354-84e3-047cb77c02aa", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669f8df78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-669f8df78-7hr8p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d6333d9d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:00.454796 containerd[1577]: 2025-12-12 22:52:00.430 [INFO][3895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.454870 containerd[1577]: 2025-12-12 22:52:00.430 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d6333d9d77 ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.454870 containerd[1577]: 2025-12-12 22:52:00.437 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.454907 containerd[1577]: 2025-12-12 22:52:00.438 [INFO][3895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--669f8df78--7hr8p-eth0", GenerateName:"whisker-669f8df78-", Namespace:"calico-system", SelfLink:"", UID:"798aee2d-22cd-4354-84e3-047cb77c02aa", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669f8df78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4", Pod:"whisker-669f8df78-7hr8p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d6333d9d77", MAC:"e2:0b:e3:23:63:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:00.454964 containerd[1577]: 2025-12-12 22:52:00.451 [INFO][3895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" Namespace="calico-system" Pod="whisker-669f8df78-7hr8p" WorkloadEndpoint="localhost-k8s-whisker--669f8df78--7hr8p-eth0" Dec 12 22:52:00.487236 containerd[1577]: time="2025-12-12T22:52:00.487176046Z" level=info msg="connecting to shim 3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4" address="unix:///run/containerd/s/f64c545e5fc3a8b0f1f45525af6440957c545f11ee59fd8cbd09a25ed2bd63a3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:00.512749 systemd[1]: Started cri-containerd-3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4.scope - libcontainer container 3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4. 
Dec 12 22:52:00.524000 audit: BPF prog-id=179 op=LOAD Dec 12 22:52:00.527132 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 12 22:52:00.527171 kernel: audit: type=1334 audit(1765579920.524:587): prog-id=179 op=LOAD Dec 12 22:52:00.526000 audit: BPF prog-id=180 op=LOAD Dec 12 22:52:00.529475 kernel: audit: type=1334 audit(1765579920.526:588): prog-id=180 op=LOAD Dec 12 22:52:00.529552 kernel: audit: type=1300 audit(1765579920.526:588): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.526000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.537995 kernel: audit: type=1327 audit(1765579920.526:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.526000 audit: BPF prog-id=180 op=UNLOAD Dec 12 22:52:00.526000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.543279 kernel: audit: type=1334 audit(1765579920.526:589): prog-id=180 op=UNLOAD Dec 12 22:52:00.543425 kernel: audit: type=1300 audit(1765579920.526:589): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.548435 kernel: audit: type=1327 audit(1765579920.526:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.526000 audit: BPF prog-id=181 op=LOAD Dec 12 22:52:00.550330 kernel: audit: type=1334 audit(1765579920.526:590): prog-id=181 op=LOAD Dec 12 22:52:00.550396 kernel: audit: type=1300 audit(1765579920.526:590): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.526000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.558249 kernel: audit: type=1327 audit(1765579920.526:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.529000 audit: BPF prog-id=182 op=LOAD Dec 12 22:52:00.529000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.531000 audit: BPF prog-id=182 op=UNLOAD Dec 12 22:52:00.531000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.531000 audit: BPF prog-id=181 op=UNLOAD Dec 12 22:52:00.531000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.531000 audit: BPF prog-id=183 op=LOAD Dec 12 22:52:00.531000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3933 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:00.531000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343031323935633837393666653064326136653961316665656666 Dec 12 22:52:00.562547 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:00.605949 containerd[1577]: time="2025-12-12T22:52:00.605634495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669f8df78-7hr8p,Uid:798aee2d-22cd-4354-84e3-047cb77c02aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a401295c8796fe0d2a6e9a1feeff354868893b6cb4a9fc54050b7012a633fb4\"" Dec 12 22:52:00.609183 containerd[1577]: time="2025-12-12T22:52:00.608973480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 22:52:00.804825 containerd[1577]: time="2025-12-12T22:52:00.804696944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:00.806394 containerd[1577]: time="2025-12-12T22:52:00.806334672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 22:52:00.806459 containerd[1577]: time="2025-12-12T22:52:00.806423005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:00.806790 kubelet[2729]: E1212 22:52:00.806634 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 22:52:00.806790 kubelet[2729]: E1212 22:52:00.806705 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 22:52:00.808751 kubelet[2729]: E1212 22:52:00.808698 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d7e34ffea647f9980c1d6396170425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5psvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-669f8df78-7hr8p_calico-system(798aee2d-22cd-4354-84e3-047cb77c02aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:00.810829 containerd[1577]: time="2025-12-12T22:52:00.810810269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 22:52:00.871590 kubelet[2729]: E1212 22:52:00.871229 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:01.004098 containerd[1577]: time="2025-12-12T22:52:01.004052863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:01.005105 containerd[1577]: time="2025-12-12T22:52:01.005067371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 22:52:01.005231 containerd[1577]: time="2025-12-12T22:52:01.005141222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:01.005381 kubelet[2729]: E1212 22:52:01.005349 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 22:52:01.005501 kubelet[2729]: E1212 22:52:01.005480 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 22:52:01.005771 kubelet[2729]: E1212 22:52:01.005689 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5psvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-669f8df78-7hr8p_calico-system(798aee2d-22cd-4354-84e3-047cb77c02aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:01.007081 kubelet[2729]: E1212 22:52:01.007032 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669f8df78-7hr8p" podUID="798aee2d-22cd-4354-84e3-047cb77c02aa" Dec 12 22:52:01.746391 kubelet[2729]: I1212 22:52:01.746293 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5faa9323-ead8-4dd6-8bad-100205ae39ca" path="/var/lib/kubelet/pods/5faa9323-ead8-4dd6-8bad-100205ae39ca/volumes" Dec 12 22:52:01.875298 kubelet[2729]: E1212 22:52:01.875056 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:01.879685 kubelet[2729]: E1212 22:52:01.879620 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669f8df78-7hr8p" podUID="798aee2d-22cd-4354-84e3-047cb77c02aa" Dec 12 22:52:01.922000 audit[4144]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:01.922000 audit[4144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff6889c70 a2=0 a3=1 items=0 ppid=2887 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:01.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:01.931000 audit[4144]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:01.931000 audit[4144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff6889c70 a2=0 a3=1 items=0 ppid=2887 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:01.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:02.008780 systemd-networkd[1292]: cali2d6333d9d77: Gained IPv6LL Dec 12 22:52:05.107399 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:53302.service - OpenSSH per-connection server daemon (10.0.0.1:53302). Dec 12 22:52:05.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.28:22-10.0.0.1:53302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:52:05.164000 audit[4226]: USER_ACCT pid=4226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:05.165214 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 53302 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:05.165000 audit[4226]: CRED_ACQ pid=4226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:05.165000 audit[4226]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb7b4f50 a2=3 a3=0 items=0 ppid=1 pid=4226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:05.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:05.167063 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:05.171271 systemd-logind[1556]: New session 9 of user core. Dec 12 22:52:05.180725 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 22:52:05.182000 audit[4226]: USER_START pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:05.183000 audit[4230]: CRED_ACQ pid=4230 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:05.334953 sshd[4230]: Connection closed by 10.0.0.1 port 53302 Dec 12 22:52:05.335663 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:05.336000 audit[4226]: USER_END pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:05.336000 audit[4226]: CRED_DISP pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:05.339971 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:53302.service: Deactivated successfully. Dec 12 22:52:05.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.28:22-10.0.0.1:53302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:05.341859 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 22:52:05.342712 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. Dec 12 22:52:05.343913 systemd-logind[1556]: Removed session 9. 
Dec 12 22:52:07.754546 containerd[1577]: time="2025-12-12T22:52:07.754492897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls8tz,Uid:0e402111-6f81-4248-9211-701497715292,Namespace:calico-system,Attempt:0,}" Dec 12 22:52:07.871474 systemd-networkd[1292]: cali449c9f9be09: Link UP Dec 12 22:52:07.871684 systemd-networkd[1292]: cali449c9f9be09: Gained carrier Dec 12 22:52:07.885231 containerd[1577]: 2025-12-12 22:52:07.782 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:07.885231 containerd[1577]: 2025-12-12 22:52:07.799 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ls8tz-eth0 goldmane-666569f655- calico-system 0e402111-6f81-4248-9211-701497715292 814 0 2025-12-12 22:51:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ls8tz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali449c9f9be09 [] [] }} ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-" Dec 12 22:52:07.885231 containerd[1577]: 2025-12-12 22:52:07.799 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.885231 containerd[1577]: 2025-12-12 22:52:07.828 [INFO][4310] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" HandleID="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Workload="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.828 [INFO][4310] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" HandleID="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Workload="localhost-k8s-goldmane--666569f655--ls8tz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ls8tz", "timestamp":"2025-12-12 22:52:07.828420674 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.828 [INFO][4310] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.828 [INFO][4310] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.828 [INFO][4310] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.839 [INFO][4310] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" host="localhost" Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.846 [INFO][4310] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.851 [INFO][4310] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.854 [INFO][4310] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.856 [INFO][4310] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:07.885545 containerd[1577]: 2025-12-12 22:52:07.856 [INFO][4310] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" host="localhost" Dec 12 22:52:07.885969 containerd[1577]: 2025-12-12 22:52:07.858 [INFO][4310] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13 Dec 12 22:52:07.885969 containerd[1577]: 2025-12-12 22:52:07.861 [INFO][4310] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" host="localhost" Dec 12 22:52:07.885969 containerd[1577]: 2025-12-12 22:52:07.867 [INFO][4310] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" host="localhost" Dec 12 22:52:07.885969 containerd[1577]: 2025-12-12 22:52:07.867 [INFO][4310] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" host="localhost" Dec 12 22:52:07.885969 containerd[1577]: 2025-12-12 22:52:07.867 [INFO][4310] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:07.885969 containerd[1577]: 2025-12-12 22:52:07.867 [INFO][4310] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" HandleID="k8s-pod-network.f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Workload="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.886170 containerd[1577]: 2025-12-12 22:52:07.869 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ls8tz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0e402111-6f81-4248-9211-701497715292", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ls8tz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali449c9f9be09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:07.886170 containerd[1577]: 2025-12-12 22:52:07.869 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.886254 containerd[1577]: 2025-12-12 22:52:07.869 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali449c9f9be09 ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.886254 containerd[1577]: 2025-12-12 22:52:07.871 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.886383 containerd[1577]: 2025-12-12 22:52:07.871 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ls8tz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0e402111-6f81-4248-9211-701497715292", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13", Pod:"goldmane-666569f655-ls8tz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali449c9f9be09", MAC:"2e:c6:5e:ff:d9:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:07.886438 containerd[1577]: 2025-12-12 22:52:07.881 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" Namespace="calico-system" Pod="goldmane-666569f655-ls8tz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ls8tz-eth0" Dec 12 22:52:07.909571 containerd[1577]: time="2025-12-12T22:52:07.909502463Z" level=info msg="connecting to shim f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13" address="unix:///run/containerd/s/92045218c63e2d101e099b729cb452ab9017c1f89a62fbaf1ff0e3dc560b743d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:07.942773 systemd[1]: Started cri-containerd-f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13.scope - libcontainer container f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13. 
Dec 12 22:52:07.952000 audit: BPF prog-id=184 op=LOAD Dec 12 22:52:07.955576 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 12 22:52:07.955636 kernel: audit: type=1334 audit(1765579927.952:606): prog-id=184 op=LOAD Dec 12 22:52:07.955657 kernel: audit: type=1334 audit(1765579927.952:607): prog-id=185 op=LOAD Dec 12 22:52:07.952000 audit: BPF prog-id=185 op=LOAD Dec 12 22:52:07.952000 audit[4347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.960789 kernel: audit: type=1300 audit(1765579927.952:607): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.960878 kernel: audit: type=1327 audit(1765579927.952:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.954000 audit: BPF prog-id=185 op=UNLOAD Dec 12 22:52:07.965600 kernel: audit: type=1334 audit(1765579927.954:608): prog-id=185 op=UNLOAD Dec 12 22:52:07.965656 kernel: audit: type=1300 audit(1765579927.954:608): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.954000 audit[4347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.965960 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:07.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.973353 kernel: audit: type=1327 audit(1765579927.954:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.973544 kernel: audit: type=1334 audit(1765579927.954:609): prog-id=186 op=LOAD Dec 12 22:52:07.954000 audit: BPF prog-id=186 op=LOAD Dec 12 22:52:07.954000 audit[4347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001383e8 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.977980 kernel: audit: type=1300 audit(1765579927.954:609): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.978067 kernel: audit: type=1327 audit(1765579927.954:609): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.955000 audit: BPF prog-id=187 op=LOAD Dec 12 22:52:07.955000 audit[4347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.959000 audit: BPF prog-id=187 op=UNLOAD Dec 12 22:52:07.959000 audit[4347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.959000 audit: BPF prog-id=186 op=UNLOAD Dec 12 22:52:07.959000 audit[4347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.959000 audit: BPF prog-id=188 op=LOAD Dec 12 22:52:07.959000 audit[4347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:07.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630376532633865303431316364666238666261366639316163393938 Dec 12 22:52:07.994614 containerd[1577]: time="2025-12-12T22:52:07.994577369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls8tz,Uid:0e402111-6f81-4248-9211-701497715292,Namespace:calico-system,Attempt:0,} returns sandbox id \"f07e2c8e0411cdfb8fba6f91ac9988c66c25aeb650787ff5d5fbec8afddf9a13\"" Dec 12 22:52:07.996664 containerd[1577]: time="2025-12-12T22:52:07.996629774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 22:52:08.198857 containerd[1577]: time="2025-12-12T22:52:08.198808673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:08.200154 containerd[1577]: time="2025-12-12T22:52:08.200049736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 22:52:08.200249 containerd[1577]: time="2025-12-12T22:52:08.200161869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:08.200436 kubelet[2729]: E1212 22:52:08.200403 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 22:52:08.201700 kubelet[2729]: E1212 22:52:08.201500 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 22:52:08.202549 kubelet[2729]: E1212 22:52:08.202161 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p526l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls8tz_calico-system(0e402111-6f81-4248-9211-701497715292): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:08.203480 kubelet[2729]: E1212 22:52:08.203425 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:52:08.742205 kubelet[2729]: E1212 22:52:08.742129 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:08.742205 kubelet[2729]: E1212 22:52:08.742155 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:08.742856 containerd[1577]: time="2025-12-12T22:52:08.742692705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxx5s,Uid:74b7771b-32af-40f6-97b9-bf5ee39960ad,Namespace:kube-system,Attempt:0,}" Dec 12 22:52:08.742856 containerd[1577]: time="2025-12-12T22:52:08.742736990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689b476fb-cz8sh,Uid:4785c37b-156a-4faf-8cfd-f73b6f7355f4,Namespace:calico-system,Attempt:0,}" Dec 12 22:52:08.742856 containerd[1577]: time="2025-12-12T22:52:08.742807719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v6p5n,Uid:98a9aa8b-84da-46fd-aac6-c85c27e3b277,Namespace:kube-system,Attempt:0,}" Dec 12 22:52:08.896789 systemd-networkd[1292]: caliec39ffa140a: Link UP Dec 12 22:52:08.898761 systemd-networkd[1292]: caliec39ffa140a: Gained carrier Dec 12 22:52:08.913975 kubelet[2729]: E1212 22:52:08.913905 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:52:08.914563 containerd[1577]: 2025-12-12 22:52:08.781 [INFO][4397] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:08.914563 containerd[1577]: 2025-12-12 22:52:08.802 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0 coredns-668d6bf9bc- kube-system 98a9aa8b-84da-46fd-aac6-c85c27e3b277 813 0 2025-12-12 22:51:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-v6p5n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliec39ffa140a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-" Dec 12 22:52:08.914563 containerd[1577]: 2025-12-12 22:52:08.802 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.914563 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4444] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" HandleID="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Workload="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.914983 
containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4444] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" HandleID="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Workload="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003af250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-v6p5n", "timestamp":"2025-12-12 22:52:08.84407996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4444] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4444] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.855 [INFO][4444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" host="localhost" Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.860 [INFO][4444] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.866 [INFO][4444] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.868 [INFO][4444] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.870 [INFO][4444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:08.914983 containerd[1577]: 2025-12-12 22:52:08.871 [INFO][4444] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" host="localhost" Dec 12 22:52:08.915196 containerd[1577]: 2025-12-12 22:52:08.873 [INFO][4444] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef Dec 12 22:52:08.915196 containerd[1577]: 2025-12-12 22:52:08.877 [INFO][4444] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" host="localhost" Dec 12 22:52:08.915196 containerd[1577]: 2025-12-12 22:52:08.888 [INFO][4444] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" host="localhost" Dec 12 22:52:08.915196 containerd[1577]: 2025-12-12 22:52:08.888 [INFO][4444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" host="localhost" Dec 12 22:52:08.915196 containerd[1577]: 2025-12-12 22:52:08.888 [INFO][4444] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:08.915196 containerd[1577]: 2025-12-12 22:52:08.888 [INFO][4444] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" HandleID="k8s-pod-network.250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Workload="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.915303 containerd[1577]: 2025-12-12 22:52:08.893 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98a9aa8b-84da-46fd-aac6-c85c27e3b277", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-v6p5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec39ffa140a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:08.915353 containerd[1577]: 2025-12-12 22:52:08.894 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.915353 containerd[1577]: 2025-12-12 22:52:08.894 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec39ffa140a ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.915353 containerd[1577]: 2025-12-12 22:52:08.899 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.915416 
containerd[1577]: 2025-12-12 22:52:08.899 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98a9aa8b-84da-46fd-aac6-c85c27e3b277", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef", Pod:"coredns-668d6bf9bc-v6p5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec39ffa140a", MAC:"b6:48:61:af:7b:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:08.915416 containerd[1577]: 2025-12-12 22:52:08.910 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v6p5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v6p5n-eth0" Dec 12 22:52:08.937000 audit[4475]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:08.937000 audit[4475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff4cf01d0 a2=0 a3=1 items=0 ppid=2887 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:08.944396 containerd[1577]: time="2025-12-12T22:52:08.944348486Z" level=info msg="connecting to shim 250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef" address="unix:///run/containerd/s/d54d1ac6ef60193888d55c26d050bbe434bc468475c074811a4464430af21c1c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:08.944000 
audit[4475]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:08.944000 audit[4475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff4cf01d0 a2=0 a3=1 items=0 ppid=2887 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:08.974079 systemd[1]: Started cri-containerd-250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef.scope - libcontainer container 250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef. Dec 12 22:52:08.993000 audit: BPF prog-id=189 op=LOAD Dec 12 22:52:08.994000 audit: BPF prog-id=190 op=LOAD Dec 12 22:52:08.994000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.994000 audit: BPF prog-id=190 op=UNLOAD Dec 12 22:52:08.994000 audit[4495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.994000 audit: BPF prog-id=191 op=LOAD Dec 12 22:52:08.994000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.995000 audit: BPF prog-id=192 op=LOAD Dec 12 22:52:08.995000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.995000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.995000 audit: BPF prog-id=192 op=UNLOAD Dec 12 22:52:08.995000 audit[4495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.995000 audit: BPF prog-id=191 op=UNLOAD Dec 12 22:52:08.995000 audit[4495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.995000 audit: BPF prog-id=193 op=LOAD Dec 12 22:52:08.995000 audit[4495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:08.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306133326532656234666634306638623239656562396134383731 Dec 12 22:52:08.997910 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:08.999234 systemd-networkd[1292]: cali840d2b215f3: Link UP Dec 12 22:52:08.999839 systemd-networkd[1292]: cali840d2b215f3: Gained carrier Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.788 [INFO][4401] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.807 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0 coredns-668d6bf9bc- kube-system 74b7771b-32af-40f6-97b9-bf5ee39960ad 804 0 2025-12-12 22:51:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-cxx5s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali840d2b215f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.807 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4442] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" HandleID="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Workload="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4442] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" HandleID="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Workload="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004db30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-cxx5s", "timestamp":"2025-12-12 22:52:08.84407688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.844 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.888 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.888 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.956 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.961 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.967 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.971 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.976 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.976 [INFO][4442] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.979 [INFO][4442] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.985 [INFO][4442] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.991 [INFO][4442] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.992 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" host="localhost" Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.992 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:09.022163 containerd[1577]: 2025-12-12 22:52:08.992 [INFO][4442] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" HandleID="k8s-pod-network.61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Workload="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.022717 containerd[1577]: 2025-12-12 22:52:08.996 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"74b7771b-32af-40f6-97b9-bf5ee39960ad", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-cxx5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali840d2b215f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:09.022717 containerd[1577]: 2025-12-12 22:52:08.996 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.022717 containerd[1577]: 2025-12-12 22:52:08.996 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali840d2b215f3 ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.022717 containerd[1577]: 2025-12-12 22:52:09.000 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.022717 
containerd[1577]: 2025-12-12 22:52:09.001 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"74b7771b-32af-40f6-97b9-bf5ee39960ad", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb", Pod:"coredns-668d6bf9bc-cxx5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali840d2b215f3", MAC:"4a:b6:e1:b5:ee:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:09.022717 containerd[1577]: 2025-12-12 22:52:09.017 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxx5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cxx5s-eth0" Dec 12 22:52:09.030055 containerd[1577]: time="2025-12-12T22:52:09.030003307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v6p5n,Uid:98a9aa8b-84da-46fd-aac6-c85c27e3b277,Namespace:kube-system,Attempt:0,} returns sandbox id \"250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef\"" Dec 12 22:52:09.031019 kubelet[2729]: E1212 22:52:09.030984 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:09.033778 containerd[1577]: time="2025-12-12T22:52:09.033740687Z" level=info msg="CreateContainer within sandbox \"250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 22:52:09.046487 containerd[1577]: time="2025-12-12T22:52:09.046436595Z" level=info msg="Container 128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3: CDI devices from CRI Config.CDIDevices: 
[]" Dec 12 22:52:09.049307 containerd[1577]: time="2025-12-12T22:52:09.049256392Z" level=info msg="connecting to shim 61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb" address="unix:///run/containerd/s/b5508c5800e4becd0c2fb19fca95c26d98d4f41bce8ffbca8a33f064ff61c1c5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:09.053861 containerd[1577]: time="2025-12-12T22:52:09.053809864Z" level=info msg="CreateContainer within sandbox \"250a32e2eb4ff40f8b29eeb9a48719b3227161bad8fa937174604548ca8a3fef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3\"" Dec 12 22:52:09.054545 containerd[1577]: time="2025-12-12T22:52:09.054479459Z" level=info msg="StartContainer for \"128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3\"" Dec 12 22:52:09.055493 containerd[1577]: time="2025-12-12T22:52:09.055465650Z" level=info msg="connecting to shim 128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3" address="unix:///run/containerd/s/d54d1ac6ef60193888d55c26d050bbe434bc468475c074811a4464430af21c1c" protocol=ttrpc version=3 Dec 12 22:52:09.079804 systemd[1]: Started cri-containerd-128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3.scope - libcontainer container 128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3. Dec 12 22:52:09.084664 systemd[1]: Started cri-containerd-61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb.scope - libcontainer container 61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb. Dec 12 22:52:09.101000 audit: BPF prog-id=194 op=LOAD Dec 12 22:52:09.105568 systemd-networkd[1292]: cali12451f26a73: Link UP Dec 12 22:52:09.105725 systemd-networkd[1292]: cali12451f26a73: Gained carrier Dec 12 22:52:09.104000 audit: BPF prog-id=195 op=LOAD Dec 12 22:52:09.104000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.104000 audit: BPF prog-id=195 op=UNLOAD Dec 12 22:52:09.104000 audit[4548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.105000 audit: BPF prog-id=196 op=LOAD Dec 12 22:52:09.105000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.105000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.106000 audit: BPF prog-id=197 op=LOAD Dec 12 22:52:09.106000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.106000 audit: BPF prog-id=197 op=UNLOAD Dec 12 22:52:09.106000 audit[4548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.106000 audit: BPF prog-id=196 op=UNLOAD Dec 12 22:52:09.106000 audit[4548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.106000 audit: BPF prog-id=198 op=LOAD Dec 12 22:52:09.106000 audit: BPF prog-id=199 op=LOAD Dec 12 22:52:09.106000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4537 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643966613336353437336237346331643364353133663436306562 Dec 12 22:52:09.108829 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:09.108000 audit: BPF prog-id=200 op=LOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.108000 audit: BPF prog-id=200 op=UNLOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.108000 audit: BPF prog-id=201 op=LOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.108000 audit: BPF prog-id=202 op=LOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.108000 audit: BPF prog-id=202 op=UNLOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.108000 audit: BPF prog-id=201 op=UNLOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.108000 audit: BPF prog-id=203 op=LOAD Dec 12 22:52:09.108000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4484 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132383330376237303533653733346165373635396532636430326434 Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.788 [INFO][4409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.813 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0 calico-kube-controllers-6689b476fb- calico-system 4785c37b-156a-4faf-8cfd-f73b6f7355f4 812 0 2025-12-12 22:51:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6689b476fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6689b476fb-cz8sh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali12451f26a73 [] [] }} ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.813 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.851 [INFO][4454] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" HandleID="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Workload="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.851 [INFO][4454] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" HandleID="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Workload="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6689b476fb-cz8sh", "timestamp":"2025-12-12 22:52:08.851304837 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.851 [INFO][4454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.992 [INFO][4454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:08.992 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.058 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.067 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.073 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.076 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.078 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.078 [INFO][4454] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.080 [INFO][4454] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.086 [INFO][4454] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.096 [INFO][4454] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.096 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" host="localhost" Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.096 [INFO][4454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:09.129785 containerd[1577]: 2025-12-12 22:52:09.096 [INFO][4454] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" HandleID="k8s-pod-network.6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Workload="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.130330 containerd[1577]: 2025-12-12 22:52:09.102 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0", GenerateName:"calico-kube-controllers-6689b476fb-", Namespace:"calico-system", SelfLink:"", UID:"4785c37b-156a-4faf-8cfd-f73b6f7355f4", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6689b476fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6689b476fb-cz8sh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12451f26a73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:09.130330 containerd[1577]: 2025-12-12 22:52:09.102 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.130330 containerd[1577]: 2025-12-12 22:52:09.102 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12451f26a73 ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.130330 containerd[1577]: 2025-12-12 22:52:09.105 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.130330 containerd[1577]: 2025-12-12 22:52:09.108 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0", GenerateName:"calico-kube-controllers-6689b476fb-", Namespace:"calico-system", SelfLink:"", UID:"4785c37b-156a-4faf-8cfd-f73b6f7355f4", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6689b476fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be", Pod:"calico-kube-controllers-6689b476fb-cz8sh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12451f26a73", MAC:"de:9e:de:f7:37:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:09.130330 containerd[1577]: 2025-12-12 22:52:09.120 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" Namespace="calico-system" Pod="calico-kube-controllers-6689b476fb-cz8sh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689b476fb--cz8sh-eth0" Dec 12 22:52:09.150089 containerd[1577]: time="2025-12-12T22:52:09.149903870Z" level=info msg="StartContainer for \"128307b7053e734ae7659e2cd02d4494005c09dee7e1efac8285804e610fa0b3\" returns successfully" Dec 12 22:52:09.154189 containerd[1577]: time="2025-12-12T22:52:09.154138266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxx5s,Uid:74b7771b-32af-40f6-97b9-bf5ee39960ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb\"" Dec 12 22:52:09.155901 kubelet[2729]: E1212 22:52:09.155862 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:09.158839 containerd[1577]: time="2025-12-12T22:52:09.158791069Z" level=info msg="CreateContainer within sandbox \"61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 22:52:09.166249 containerd[1577]: time="2025-12-12T22:52:09.166125694Z" level=info msg="connecting to shim 6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be" 
address="unix:///run/containerd/s/fcff5a02d09b67c3825e7154a920f025a45bcac4ed4ecb021672570f0f2c9790" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:09.171706 containerd[1577]: time="2025-12-12T22:52:09.171642434Z" level=info msg="Container 7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641: CDI devices from CRI Config.CDIDevices: []" Dec 12 22:52:09.180940 containerd[1577]: time="2025-12-12T22:52:09.180868192Z" level=info msg="CreateContainer within sandbox \"61d9fa365473b74c1d3d513f460ebf601f7f306c2551dbccb50ff26a9f3296fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641\"" Dec 12 22:52:09.183179 containerd[1577]: time="2025-12-12T22:52:09.183063839Z" level=info msg="StartContainer for \"7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641\"" Dec 12 22:52:09.202337 containerd[1577]: time="2025-12-12T22:52:09.202289201Z" level=info msg="connecting to shim 7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641" address="unix:///run/containerd/s/b5508c5800e4becd0c2fb19fca95c26d98d4f41bce8ffbca8a33f064ff61c1c5" protocol=ttrpc version=3 Dec 12 22:52:09.214903 systemd[1]: Started cri-containerd-6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be.scope - libcontainer container 6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be. Dec 12 22:52:09.224337 systemd[1]: Started cri-containerd-7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641.scope - libcontainer container 7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641. Dec 12 22:52:09.240000 audit: BPF prog-id=204 op=LOAD Dec 12 22:52:09.241000 audit: BPF prog-id=205 op=LOAD Dec 12 22:52:09.241000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.241000 audit: BPF prog-id=205 op=UNLOAD Dec 12 22:52:09.241000 audit[4648]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.242000 audit: BPF prog-id=206 op=LOAD Dec 12 22:52:09.242000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.242000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.242000 audit: BPF prog-id=207 op=LOAD Dec 12 22:52:09.242000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.242000 audit: BPF prog-id=207 op=UNLOAD Dec 12 22:52:09.242000 audit[4648]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.242000 audit: BPF prog-id=206 op=UNLOAD Dec 12 22:52:09.242000 audit[4648]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.242000 audit: BPF prog-id=208 op=LOAD Dec 12 22:52:09.242000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4537 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346430363832656161633061376465633762333937626432313166 Dec 12 22:52:09.263000 audit: BPF prog-id=209 op=LOAD Dec 12 22:52:09.264000 audit: BPF prog-id=210 op=LOAD Dec 12 22:52:09.264000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.264000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.264000 audit: BPF prog-id=210 op=UNLOAD Dec 12 22:52:09.264000 audit[4635]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.265000 audit: BPF prog-id=211 op=LOAD Dec 12 22:52:09.265000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.265000 audit: BPF prog-id=212 op=LOAD Dec 12 22:52:09.265000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.265000 audit: BPF prog-id=212 op=UNLOAD Dec 12 22:52:09.265000 audit[4635]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.265000 audit: BPF prog-id=211 op=UNLOAD Dec 12 22:52:09.265000 audit[4635]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.265000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.265000 audit: BPF prog-id=213 op=LOAD Dec 12 22:52:09.265000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4619 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323565616432613530313631323432303162303562616561383532 Dec 12 22:52:09.268940 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:09.288890 containerd[1577]: time="2025-12-12T22:52:09.288847694Z" level=info msg="StartContainer for \"7c4d0682eaac0a7dec7b397bd211f4da352a40fbacfe1f1d9945b7a2c555c641\" returns successfully" Dec 12 22:52:09.344491 containerd[1577]: time="2025-12-12T22:52:09.344353416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689b476fb-cz8sh,Uid:4785c37b-156a-4faf-8cfd-f73b6f7355f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"6825ead2a5016124201b05baea852a546bb0c2c4e66bfdb90f1206c1c7a673be\"" Dec 12 22:52:09.348961 containerd[1577]: time="2025-12-12T22:52:09.348495762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 22:52:09.560691 systemd-networkd[1292]: cali449c9f9be09: Gained IPv6LL Dec 12 22:52:09.561332 containerd[1577]: time="2025-12-12T22:52:09.561166558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:09.562874 containerd[1577]: time="2025-12-12T22:52:09.562816343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 22:52:09.563107 containerd[1577]: time="2025-12-12T22:52:09.562826224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:09.563371 kubelet[2729]: E1212 22:52:09.563311 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 22:52:09.564132 kubelet[2729]: E1212 22:52:09.563731 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 22:52:09.564341 kubelet[2729]: E1212 22:52:09.564253 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f2jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6689b476fb-cz8sh_calico-system(4785c37b-156a-4faf-8cfd-f73b6f7355f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:09.565535 kubelet[2729]: E1212 22:52:09.565493 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:52:09.764309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895850075.mount: Deactivated successfully. 
Dec 12 22:52:09.910977 kubelet[2729]: E1212 22:52:09.910857 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:52:09.911907 kubelet[2729]: E1212 22:52:09.911787 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:09.917276 kubelet[2729]: E1212 22:52:09.917016 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:09.918077 kubelet[2729]: E1212 22:52:09.918044 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:52:09.985675 kubelet[2729]: I1212 22:52:09.985595 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cxx5s" podStartSLOduration=35.985574844 podStartE2EDuration="35.985574844s" podCreationTimestamp="2025-12-12 22:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 22:52:09.984400112 +0000 UTC m=+42.328989641" watchObservedRunningTime="2025-12-12 22:52:09.985574844 +0000 UTC m=+42.330164373" Dec 12 22:52:09.999000 audit[4716]: NETFILTER_CFG table=filter:123 family=2 entries=19 op=nft_register_rule pid=4716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:09.999000 audit[4716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe31b7ec0 a2=0 a3=1 items=0 ppid=2887 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:09.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:10.006000 audit[4716]: NETFILTER_CFG table=nat:124 family=2 entries=33 op=nft_register_chain pid=4716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:10.006000 audit[4716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffe31b7ec0 a2=0 a3=1 items=0 ppid=2887 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.006000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:10.028926 kubelet[2729]: I1212 22:52:10.028298 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v6p5n" podStartSLOduration=36.028277483 podStartE2EDuration="36.028277483s" podCreationTimestamp="2025-12-12 22:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 22:52:10.003891976 +0000 UTC m=+42.348481505" watchObservedRunningTime="2025-12-12 22:52:10.028277483 +0000 UTC m=+42.372866972" Dec 12 22:52:10.053000 audit[4719]: NETFILTER_CFG table=filter:125 family=2 entries=16 op=nft_register_rule pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:10.053000 audit[4719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc4131330 a2=0 a3=1 items=0 ppid=2887 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:10.072667 systemd-networkd[1292]: cali840d2b215f3: Gained IPv6LL Dec 12 22:52:10.075000 audit[4719]: NETFILTER_CFG table=nat:126 family=2 entries=54 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:10.075000 audit[4719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=ffffc4131330 a2=0 a3=1 items=0 ppid=2887 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:10.355506 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:53308.service - OpenSSH per-connection server daemon (10.0.0.1:53308). Dec 12 22:52:10.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.28:22-10.0.0.1:53308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:52:10.438995 kubelet[2729]: I1212 22:52:10.438955 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 22:52:10.438000 audit[4728]: USER_ACCT pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:10.441258 sshd[4728]: Accepted publickey for core from 10.0.0.1 port 53308 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:10.442474 kubelet[2729]: E1212 22:52:10.441556 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:10.441000 audit[4728]: CRED_ACQ pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:10.441000 audit[4728]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd62e6bb0 a2=3 a3=0 items=0 ppid=1 pid=4728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:10.444336 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:10.457265 systemd-logind[1556]: New session 10 of user core. Dec 12 22:52:10.463846 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 22:52:10.467000 audit[4728]: USER_START pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:10.469000 audit[4746]: CRED_ACQ pid=4746 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:10.520730 systemd-networkd[1292]: cali12451f26a73: Gained IPv6LL Dec 12 22:52:10.647601 sshd[4746]: Connection closed by 10.0.0.1 port 53308 Dec 12 22:52:10.648447 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:10.649043 systemd-networkd[1292]: caliec39ffa140a: Gained IPv6LL Dec 12 22:52:10.649000 audit[4728]: USER_END pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:10.649000 audit[4728]: CRED_DISP pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:10.654433 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. 
Dec 12 22:52:10.654816 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:53308.service: Deactivated successfully. Dec 12 22:52:10.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.28:22-10.0.0.1:53308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:10.656983 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 22:52:10.658974 systemd-logind[1556]: Removed session 10. Dec 12 22:52:10.742632 containerd[1577]: time="2025-12-12T22:52:10.742506675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-ctm57,Uid:a816068e-6aeb-4536-8ece-56ad68b4e384,Namespace:calico-apiserver,Attempt:0,}" Dec 12 22:52:10.743026 containerd[1577]: time="2025-12-12T22:52:10.742579563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-lb748,Uid:22da6c49-6a9b-4270-91c0-2f3cf459c08b,Namespace:calico-apiserver,Attempt:0,}" Dec 12 22:52:10.885934 systemd-networkd[1292]: calidcb1490effd: Link UP Dec 12 22:52:10.886165 systemd-networkd[1292]: calidcb1490effd: Gained carrier Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.788 [INFO][4792] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.802 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0 calico-apiserver-756859cb7d- calico-apiserver 22da6c49-6a9b-4270-91c0-2f3cf459c08b 815 0 2025-12-12 22:51:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:756859cb7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-756859cb7d-lb748 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidcb1490effd [] [] }} ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.802 [INFO][4792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.835 [INFO][4812] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" HandleID="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Workload="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4812] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" HandleID="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Workload="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31b0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-756859cb7d-lb748", "timestamp":"2025-12-12 22:52:10.835841963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.847 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.854 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.860 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.862 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.865 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.865 [INFO][4812] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.867 [INFO][4812] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.872 [INFO][4812] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.879 [INFO][4812] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.879 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" host="localhost" Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.879 [INFO][4812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:10.901799 containerd[1577]: 2025-12-12 22:52:10.879 [INFO][4812] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" HandleID="k8s-pod-network.a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Workload="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.902415 containerd[1577]: 2025-12-12 22:52:10.882 [INFO][4792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0", GenerateName:"calico-apiserver-756859cb7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"22da6c49-6a9b-4270-91c0-2f3cf459c08b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756859cb7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-756859cb7d-lb748", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcb1490effd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:10.902415 containerd[1577]: 2025-12-12 22:52:10.882 [INFO][4792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.902415 containerd[1577]: 2025-12-12 22:52:10.882 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcb1490effd ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.902415 containerd[1577]: 2025-12-12 22:52:10.884 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.902415 containerd[1577]: 2025-12-12 22:52:10.885 [INFO][4792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0", GenerateName:"calico-apiserver-756859cb7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"22da6c49-6a9b-4270-91c0-2f3cf459c08b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756859cb7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa", Pod:"calico-apiserver-756859cb7d-lb748", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcb1490effd", MAC:"56:b3:ff:1b:86:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:10.902415 containerd[1577]: 2025-12-12 22:52:10.897 [INFO][4792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-lb748" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--lb748-eth0" Dec 12 22:52:10.919469 kubelet[2729]: E1212 22:52:10.918757 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:10.919469 kubelet[2729]: E1212 22:52:10.919381 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:10.920073 kubelet[2729]: E1212 22:52:10.920049 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:10.923723 kubelet[2729]: E1212 22:52:10.923666 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:52:10.931177 containerd[1577]: 
time="2025-12-12T22:52:10.931129024Z" level=info msg="connecting to shim a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa" address="unix:///run/containerd/s/d7bd88517a2a9bb94d323d70a13bc0df71b186e0562f86c5a821e1042ed7306d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:10.960799 systemd[1]: Started cri-containerd-a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa.scope - libcontainer container a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa. Dec 12 22:52:10.974000 audit: BPF prog-id=214 op=LOAD Dec 12 22:52:10.974000 audit: BPF prog-id=215 op=LOAD Dec 12 22:52:10.974000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.975000 audit: BPF prog-id=215 op=UNLOAD Dec 12 22:52:10.975000 audit[4851]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.975000 audit: BPF prog-id=216 op=LOAD Dec 12 22:52:10.975000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.975000 audit: BPF prog-id=217 op=LOAD Dec 12 22:52:10.975000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.975000 audit: BPF prog-id=217 op=UNLOAD Dec 12 22:52:10.975000 audit[4851]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.975000 audit: BPF prog-id=216 op=UNLOAD Dec 12 22:52:10.975000 audit[4851]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.975000 audit: BPF prog-id=218 op=LOAD Dec 12 22:52:10.975000 audit[4851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4838 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:10.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643436343237353433313735306566343934646539346566616430 Dec 12 22:52:10.978440 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:10.998739 systemd-networkd[1292]: calid5f95f42413: Link UP Dec 12 22:52:10.999445 systemd-networkd[1292]: calid5f95f42413: Gained carrier Dec 12 22:52:11.013010 containerd[1577]: time="2025-12-12T22:52:11.012965781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-lb748,Uid:22da6c49-6a9b-4270-91c0-2f3cf459c08b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a6d464275431750ef494de94efad0ca81d8f7122cb248b118dc85277eaabd3fa\"" Dec 12 22:52:11.015483 containerd[1577]: time="2025-12-12T22:52:11.015445845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.782 [INFO][4780] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.800 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0 calico-apiserver-756859cb7d- calico-apiserver a816068e-6aeb-4536-8ece-56ad68b4e384 811 0 2025-12-12 22:51:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:756859cb7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-756859cb7d-ctm57 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid5f95f42413 [] [] }} 
ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.800 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4806] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" HandleID="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Workload="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4806] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" HandleID="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Workload="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003555c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-756859cb7d-ctm57", "timestamp":"2025-12-12 22:52:10.836504396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.836 [INFO][4806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.879 [INFO][4806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.879 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.947 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.958 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.965 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.968 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.972 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.972 [INFO][4806] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.974 [INFO][4806] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40 Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.980 [INFO][4806] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.989 [INFO][4806] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.989 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" host="localhost" Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.989 [INFO][4806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:11.020473 containerd[1577]: 2025-12-12 22:52:10.989 [INFO][4806] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" HandleID="k8s-pod-network.3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Workload="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.021124 containerd[1577]: 2025-12-12 22:52:10.993 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0", GenerateName:"calico-apiserver-756859cb7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a816068e-6aeb-4536-8ece-56ad68b4e384", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756859cb7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-756859cb7d-ctm57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5f95f42413", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:11.021124 containerd[1577]: 2025-12-12 22:52:10.993 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.021124 containerd[1577]: 2025-12-12 22:52:10.993 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5f95f42413 ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.021124 containerd[1577]: 2025-12-12 22:52:10.999 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.021124 containerd[1577]: 2025-12-12 22:52:11.004 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0", GenerateName:"calico-apiserver-756859cb7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a816068e-6aeb-4536-8ece-56ad68b4e384", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756859cb7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40", Pod:"calico-apiserver-756859cb7d-ctm57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5f95f42413", MAC:"de:c1:f4:ac:d1:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:11.021124 containerd[1577]: 2025-12-12 22:52:11.017 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" Namespace="calico-apiserver" Pod="calico-apiserver-756859cb7d-ctm57" WorkloadEndpoint="localhost-k8s-calico--apiserver--756859cb7d--ctm57-eth0" Dec 12 22:52:11.041916 containerd[1577]: time="2025-12-12T22:52:11.041868499Z" level=info msg="connecting to shim 3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40" address="unix:///run/containerd/s/22bf1238f5e6b8c8c60971e70c75ace1afed3482d34568f5b0005419ff72e138" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:11.073821 systemd[1]: Started cri-containerd-3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40.scope - libcontainer container 3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40. 
Dec 12 22:52:11.090000 audit: BPF prog-id=219 op=LOAD Dec 12 22:52:11.090000 audit: BPF prog-id=220 op=LOAD Dec 12 22:52:11.090000 audit[4913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.090000 audit: BPF prog-id=220 op=UNLOAD Dec 12 22:52:11.090000 audit[4913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.090000 audit: BPF prog-id=221 op=LOAD Dec 12 22:52:11.090000 audit[4913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.090000 audit: BPF prog-id=222 op=LOAD Dec 12 22:52:11.090000 audit[4913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.091000 audit: BPF prog-id=222 op=UNLOAD Dec 12 22:52:11.091000 audit[4913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.091000 audit: BPF prog-id=221 op=UNLOAD Dec 12 22:52:11.091000 audit[4913]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.091000 audit: BPF prog-id=223 op=LOAD Dec 12 22:52:11.091000 audit[4913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4900 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313866366664656565366139383835323833353133623438666130 Dec 12 22:52:11.094589 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 22:52:11.099000 audit[4933]: NETFILTER_CFG table=filter:127 family=2 entries=15 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:11.099000 audit[4933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9fd5530 a2=0 a3=1 items=0 ppid=2887 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:11.104000 audit[4933]: NETFILTER_CFG table=nat:128 family=2 entries=25 op=nft_register_chain pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:11.104000 audit[4933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=fffff9fd5530 a2=0 a3=1 items=0 ppid=2887 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:11.123248 containerd[1577]: time="2025-12-12T22:52:11.123045542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756859cb7d-ctm57,Uid:a816068e-6aeb-4536-8ece-56ad68b4e384,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3018f6fdeee6a9885283513b48fa0af00bec0f07ab602b5511bb446947bb3b40\"" Dec 12 22:52:11.123000 audit: BPF prog-id=224 op=LOAD Dec 12 22:52:11.123000 audit[4948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0dc95b8 a2=98 a3=ffffc0dc95a8 items=0 ppid=4870 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.123000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 22:52:11.123000 audit: BPF prog-id=224 op=UNLOAD Dec 12 22:52:11.123000 audit[4948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc0dc9588 a3=0 items=0 ppid=4870 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.123000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 22:52:11.123000 audit: BPF prog-id=225 op=LOAD Dec 12 22:52:11.123000 audit[4948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0dc9468 a2=74 a3=95 items=0 ppid=4870 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.123000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 22:52:11.124000 audit: BPF prog-id=225 op=UNLOAD Dec 12 22:52:11.124000 audit[4948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4870 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.124000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 22:52:11.124000 audit: BPF prog-id=226 op=LOAD Dec 12 22:52:11.124000 audit[4948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0dc9498 a2=40 a3=ffffc0dc94c8 items=0 ppid=4870 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.124000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 22:52:11.124000 audit: BPF prog-id=226 op=UNLOAD Dec 12 22:52:11.124000 audit[4948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc0dc94c8 items=0 ppid=4870 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.124000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 22:52:11.126000 audit: BPF prog-id=227 op=LOAD Dec 12 22:52:11.126000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc84f2bb8 a2=98 a3=ffffc84f2ba8 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.126000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.126000 audit: BPF prog-id=227 op=UNLOAD Dec 12 22:52:11.126000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc84f2b88 a3=0 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.126000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.126000 audit: BPF prog-id=228 op=LOAD Dec 12 22:52:11.126000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc84f2848 a2=74 a3=95 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.126000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.127000 audit: BPF prog-id=228 op=UNLOAD Dec 12 22:52:11.127000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.127000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.127000 audit: BPF prog-id=229 op=LOAD Dec 12 22:52:11.127000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc84f28a8 a2=94 a3=2 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.127000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.127000 audit: BPF prog-id=229 op=UNLOAD Dec 12 22:52:11.127000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.127000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.202790 containerd[1577]: time="2025-12-12T22:52:11.202743547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:11.204125 containerd[1577]: time="2025-12-12T22:52:11.204086490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 22:52:11.204390 containerd[1577]: time="2025-12-12T22:52:11.204147097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:11.204587 kubelet[2729]: E1212 22:52:11.204516 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:11.204670 kubelet[2729]: E1212 22:52:11.204603 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:11.204886 kubelet[2729]: E1212 22:52:11.204834 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl4cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756859cb7d-lb748_calico-apiserver(22da6c49-6a9b-4270-91c0-2f3cf459c08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:11.205951 containerd[1577]: 
time="2025-12-12T22:52:11.205772750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 22:52:11.206128 kubelet[2729]: E1212 22:52:11.205941 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:52:11.224000 audit: BPF prog-id=230 op=LOAD Dec 12 22:52:11.224000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc84f2868 a2=40 a3=ffffc84f2898 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.224000 audit: BPF prog-id=230 op=UNLOAD Dec 12 22:52:11.224000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc84f2898 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.234000 audit: BPF prog-id=231 op=LOAD Dec 12 22:52:11.234000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc84f2878 a2=94 a3=4 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.234000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.234000 audit: BPF prog-id=231 op=UNLOAD Dec 12 22:52:11.234000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.234000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.234000 audit: BPF prog-id=232 op=LOAD Dec 12 22:52:11.234000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc84f26b8 a2=94 a3=5 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.234000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.234000 audit: BPF prog-id=232 op=UNLOAD Dec 12 22:52:11.234000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.234000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.234000 audit: BPF prog-id=233 op=LOAD 
Dec 12 22:52:11.234000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc84f28e8 a2=94 a3=6 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.234000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.234000 audit: BPF prog-id=233 op=UNLOAD Dec 12 22:52:11.234000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.234000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.235000 audit: BPF prog-id=234 op=LOAD Dec 12 22:52:11.235000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc84f20b8 a2=94 a3=83 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.235000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.235000 audit: BPF prog-id=235 op=LOAD Dec 12 22:52:11.235000 audit[4949]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc84f1e78 a2=94 a3=2 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.235000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.235000 audit: BPF prog-id=235 op=UNLOAD Dec 12 22:52:11.235000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.235000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.236000 audit: BPF prog-id=234 op=UNLOAD Dec 12 22:52:11.236000 audit[4949]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1ecd1620 a3=1ecc4b00 items=0 ppid=4870 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.236000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 22:52:11.246000 audit: BPF prog-id=236 op=LOAD Dec 12 22:52:11.246000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa01b938 a2=98 a3=fffffa01b928 items=0 ppid=4870 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.246000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 22:52:11.246000 
audit: BPF prog-id=236 op=UNLOAD Dec 12 22:52:11.246000 audit[4952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffa01b908 a3=0 items=0 ppid=4870 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.246000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 22:52:11.247000 audit: BPF prog-id=237 op=LOAD Dec 12 22:52:11.247000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa01b7e8 a2=74 a3=95 items=0 ppid=4870 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.247000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 22:52:11.247000 audit: BPF prog-id=237 op=UNLOAD Dec 12 22:52:11.247000 audit[4952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4870 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.247000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 22:52:11.247000 audit: BPF prog-id=238 op=LOAD Dec 12 22:52:11.247000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa01b818 a2=40 a3=fffffa01b848 items=0 ppid=4870 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.247000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 22:52:11.247000 audit: BPF prog-id=238 op=UNLOAD Dec 12 22:52:11.247000 audit[4952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffffa01b848 items=0 ppid=4870 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.247000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 22:52:11.305894 systemd-networkd[1292]: vxlan.calico: Link UP Dec 12 22:52:11.305901 
systemd-networkd[1292]: vxlan.calico: Gained carrier Dec 12 22:52:11.325000 audit: BPF prog-id=239 op=LOAD Dec 12 22:52:11.325000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed14f1b8 a2=98 a3=ffffed14f1a8 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.325000 audit: BPF prog-id=239 op=UNLOAD Dec 12 22:52:11.325000 audit[4977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffed14f188 a3=0 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.325000 audit: BPF prog-id=240 op=LOAD Dec 12 22:52:11.325000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed14ee98 a2=74 a3=95 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=240 op=UNLOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=241 op=LOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed14eef8 a2=94 a3=2 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=241 op=UNLOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=242 op=LOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed14ed78 a2=40 a3=ffffed14eda8 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=242 op=UNLOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffed14eda8 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=243 op=LOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed14eec8 a2=94 a3=b7 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=243 op=UNLOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=244 op=LOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed14e578 a2=94 a3=2 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=244 op=UNLOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.326000 audit: BPF prog-id=245 op=LOAD Dec 12 22:52:11.326000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed14e708 a2=94 a3=30 items=0 ppid=4870 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 22:52:11.329000 audit: BPF prog-id=246 op=LOAD Dec 12 22:52:11.329000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8a35298 a2=98 a3=ffffc8a35288 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.329000 audit: BPF prog-id=246 op=UNLOAD Dec 12 22:52:11.329000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc8a35268 a3=0 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.329000 audit: BPF prog-id=247 op=LOAD Dec 12 22:52:11.329000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc8a34f28 a2=74 a3=95 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.329000 audit: BPF prog-id=247 op=UNLOAD Dec 12 22:52:11.329000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4870 pid=4982 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.329000 audit: BPF prog-id=248 op=LOAD Dec 12 22:52:11.329000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc8a34f88 a2=94 a3=2 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.329000 audit: BPF prog-id=248 op=UNLOAD Dec 12 22:52:11.329000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.403359 containerd[1577]: time="2025-12-12T22:52:11.403301741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:11.404765 containerd[1577]: time="2025-12-12T22:52:11.404719692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 22:52:11.405053 containerd[1577]: time="2025-12-12T22:52:11.404804101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:11.405353 kubelet[2729]: E1212 22:52:11.405302 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:11.405427 kubelet[2729]: E1212 22:52:11.405367 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:11.405557 kubelet[2729]: E1212 22:52:11.405497 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5h9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756859cb7d-ctm57_calico-apiserver(a816068e-6aeb-4536-8ece-56ad68b4e384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:11.406777 kubelet[2729]: E1212 22:52:11.406712 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:52:11.434000 audit: BPF prog-id=249 op=LOAD Dec 12 22:52:11.434000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc8a34f48 a2=40 a3=ffffc8a34f78 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.434000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.434000 audit: BPF prog-id=249 op=UNLOAD Dec 12 22:52:11.434000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc8a34f78 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.434000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.446000 audit: BPF prog-id=250 op=LOAD Dec 12 22:52:11.446000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc8a34f58 a2=94 a3=4 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.446000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.446000 audit: BPF prog-id=250 op=UNLOAD Dec 12 22:52:11.446000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.446000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=251 op=LOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc8a34d98 a2=94 a3=5 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=251 op=UNLOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=252 op=LOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc8a34fc8 a2=94 a3=6 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=252 op=UNLOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes 
exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=253 op=LOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc8a34798 a2=94 a3=83 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=254 op=LOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc8a34558 a2=94 a3=2 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.447000 audit: BPF prog-id=254 op=UNLOAD Dec 12 22:52:11.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.448000 audit: BPF prog-id=253 op=UNLOAD Dec 12 22:52:11.448000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=7601620 a3=75f4b00 items=0 ppid=4870 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.448000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 22:52:11.455000 audit: BPF prog-id=245 op=UNLOAD Dec 12 22:52:11.455000 audit[4870]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400096a140 a2=0 a3=0 items=0 ppid=3971 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.455000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 22:52:11.524000 audit[5023]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=5023 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 22:52:11.524000 audit[5023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc0b4a670 a2=0 a3=ffff896befa8 items=0 ppid=4870 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.524000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 22:52:11.527000 audit[5024]: NETFILTER_CFG table=mangle:130 family=2 entries=16 op=nft_register_chain pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 22:52:11.527000 audit[5024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffde240c30 a2=0 a3=ffffa90b4fa8 items=0 ppid=4870 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.527000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 22:52:11.536000 audit[5027]: NETFILTER_CFG table=raw:131 family=2 entries=21 op=nft_register_chain pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 22:52:11.536000 audit[5027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe6ee62f0 a2=0 a3=ffff88322fa8 items=0 ppid=4870 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.536000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 22:52:11.536000 audit[5025]: NETFILTER_CFG table=filter:132 family=2 entries=293 op=nft_register_chain pid=5025 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 22:52:11.536000 audit[5025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=173940 a0=3 a1=ffffd01d1040 a2=0 a3=ffff82855fa8 items=0 ppid=4870 pid=5025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:11.536000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 22:52:11.923496 kubelet[2729]: E1212 22:52:11.923449 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:52:11.925098 kubelet[2729]: E1212 22:52:11.924439 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:11.925204 kubelet[2729]: E1212 22:52:11.925122 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:11.925985 kubelet[2729]: E1212 22:52:11.925623 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:52:12.119000 audit[5038]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:12.119000 audit[5038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe7e78d90 a2=0 a3=1 items=0 ppid=2887 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:12.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:12.120935 systemd-networkd[1292]: calid5f95f42413: Gained IPv6LL Dec 12 22:52:12.135000 audit[5038]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:12.135000 audit[5038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe7e78d90 a2=0 a3=1 items=0 ppid=2887 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:12.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:12.248742 systemd-networkd[1292]: calidcb1490effd: Gained IPv6LL Dec 12 22:52:12.742785 containerd[1577]: time="2025-12-12T22:52:12.742740139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nknqg,Uid:3344cb69-6333-4eee-b470-ee8fe022cd55,Namespace:calico-system,Attempt:0,}" Dec 12 22:52:12.890114 systemd-networkd[1292]: vxlan.calico: Gained IPv6LL Dec 12 22:52:12.927680 kubelet[2729]: E1212 22:52:12.927364 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:52:12.929122 kubelet[2729]: E1212 22:52:12.929033 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:52:12.942272 systemd-networkd[1292]: cali93ce71cf1e8: Link UP Dec 12 22:52:12.942690 systemd-networkd[1292]: cali93ce71cf1e8: Gained carrier Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.829 [INFO][5040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nknqg-eth0 csi-node-driver- calico-system 3344cb69-6333-4eee-b470-ee8fe022cd55 710 0 2025-12-12 22:51:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nknqg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali93ce71cf1e8 [] [] }} ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.829 [INFO][5040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.864 [INFO][5054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" HandleID="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Workload="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.864 [INFO][5054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" HandleID="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Workload="localhost-k8s-csi--node--driver--nknqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nknqg", "timestamp":"2025-12-12 22:52:12.86428091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.864 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.864 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.864 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.877 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.883 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.891 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.893 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.898 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.898 [INFO][5054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.901 [INFO][5054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9 Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.911 [INFO][5054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.936 [INFO][5054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.936 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" host="localhost" Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.936 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 22:52:12.982875 containerd[1577]: 2025-12-12 22:52:12.936 [INFO][5054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" HandleID="k8s-pod-network.881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Workload="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:12.983542 containerd[1577]: 2025-12-12 22:52:12.939 [INFO][5040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nknqg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3344cb69-6333-4eee-b470-ee8fe022cd55", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nknqg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93ce71cf1e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:12.983542 containerd[1577]: 2025-12-12 22:52:12.939 [INFO][5040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:12.983542 containerd[1577]: 2025-12-12 22:52:12.939 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93ce71cf1e8 ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:12.983542 containerd[1577]: 2025-12-12 22:52:12.943 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:12.983542 containerd[1577]: 2025-12-12 22:52:12.943 [INFO][5040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nknqg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3344cb69-6333-4eee-b470-ee8fe022cd55", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 22, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9", Pod:"csi-node-driver-nknqg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93ce71cf1e8", MAC:"8a:b4:7e:c8:ee:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 22:52:12.983542 containerd[1577]: 2025-12-12 22:52:12.976 [INFO][5040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" Namespace="calico-system" Pod="csi-node-driver-nknqg" WorkloadEndpoint="localhost-k8s-csi--node--driver--nknqg-eth0" Dec 12 22:52:13.007000 audit[5070]: NETFILTER_CFG table=filter:135 family=2 entries=56 op=nft_register_chain pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 22:52:13.009314 kernel: kauditd_printk_skb: 405 callbacks suppressed Dec 12 22:52:13.009396 kernel: audit: type=1325 audit(1765579933.007:755): table=filter:135 family=2 entries=56 op=nft_register_chain pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 22:52:13.007000 audit[5070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffd24e4220 a2=0 a3=ffff97e3bfa8 items=0 ppid=4870 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.016565 kernel: audit: type=1300 audit(1765579933.007:755): arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffd24e4220 a2=0 a3=ffff97e3bfa8 items=0 ppid=4870 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.007000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 22:52:13.019475 kernel: audit: type=1327 audit(1765579933.007:755): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 22:52:13.031549 containerd[1577]: time="2025-12-12T22:52:13.031403052Z" level=info msg="connecting to shim 881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9" address="unix:///run/containerd/s/0473c775127aff5e06ae982549977c317779b93147c1515576ab1ff674b3bac3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 22:52:13.062867 systemd[1]: Started cri-containerd-881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9.scope - libcontainer container 881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9. Dec 12 22:52:13.075000 audit: BPF prog-id=255 op=LOAD Dec 12 22:52:13.077556 kernel: audit: type=1334 audit(1765579933.075:756): prog-id=255 op=LOAD Dec 12 22:52:13.077000 audit: BPF prog-id=256 op=LOAD Dec 12 22:52:13.077000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.082847 kernel: audit: type=1334 audit(1765579933.077:757): prog-id=256 op=LOAD Dec 12 22:52:13.082944 kernel: audit: type=1300 audit(1765579933.077:757): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.082965 kernel: audit: type=1327 audit(1765579933.077:757): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.078000 audit: BPF prog-id=256 op=UNLOAD Dec 12 22:52:13.087636 kernel: audit: type=1334 audit(1765579933.078:758): prog-id=256 op=UNLOAD Dec 12 22:52:13.087686 kernel: audit: type=1300 audit(1765579933.078:758): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.078000 audit[5090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.091974 systemd-resolved[1243]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: 
No such device or address Dec 12 22:52:13.095159 kernel: audit: type=1327 audit(1765579933.078:758): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.078000 audit: BPF prog-id=257 op=LOAD Dec 12 22:52:13.078000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.081000 audit: BPF prog-id=258 op=LOAD Dec 12 22:52:13.081000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.082000 audit: BPF prog-id=258 op=UNLOAD Dec 12 22:52:13.082000 audit[5090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.082000 audit: BPF prog-id=257 op=UNLOAD Dec 12 22:52:13.082000 audit[5090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.084000 audit: BPF prog-id=259 op=LOAD Dec 12 22:52:13.084000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:13.084000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838316465343362333830666464303239386532316261303537333238 Dec 12 22:52:13.127801 containerd[1577]: time="2025-12-12T22:52:13.127728201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nknqg,Uid:3344cb69-6333-4eee-b470-ee8fe022cd55,Namespace:calico-system,Attempt:0,} returns sandbox id \"881de43b380fdd0298e21ba0573282ba2070f3f42ea75e3d31ab2e3008efdec9\"" Dec 12 22:52:13.129666 containerd[1577]: time="2025-12-12T22:52:13.129477858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 22:52:13.352178 containerd[1577]: time="2025-12-12T22:52:13.352013782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:13.357061 containerd[1577]: time="2025-12-12T22:52:13.356973844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 22:52:13.357217 containerd[1577]: time="2025-12-12T22:52:13.357094176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:13.357308 kubelet[2729]: E1212 22:52:13.357263 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 22:52:13.357394 kubelet[2729]: E1212 22:52:13.357328 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 22:52:13.357721 kubelet[2729]: E1212 22:52:13.357475 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:13.359879 containerd[1577]: time="2025-12-12T22:52:13.359842094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 22:52:13.551962 containerd[1577]: time="2025-12-12T22:52:13.551900173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:13.553084 containerd[1577]: time="2025-12-12T22:52:13.553038689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 22:52:13.553225 containerd[1577]: time="2025-12-12T22:52:13.553121657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:13.553405 kubelet[2729]: E1212 22:52:13.553351 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 22:52:13.553490 kubelet[2729]: E1212 22:52:13.553417 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 22:52:13.553623 kubelet[2729]: E1212 22:52:13.553553 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:13.554944 kubelet[2729]: E1212 22:52:13.554863 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:52:13.932978 kubelet[2729]: E1212 22:52:13.932923 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:52:14.680706 systemd-networkd[1292]: cali93ce71cf1e8: Gained IPv6LL Dec 12 22:52:14.744980 containerd[1577]: time="2025-12-12T22:52:14.744672486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 22:52:14.921731 containerd[1577]: time="2025-12-12T22:52:14.921678099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:14.923022 containerd[1577]: time="2025-12-12T22:52:14.922970347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 22:52:14.923267 containerd[1577]: time="2025-12-12T22:52:14.923025713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:14.923508 kubelet[2729]: E1212 22:52:14.923469 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 22:52:14.923620 kubelet[2729]: E1212 22:52:14.923555 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 22:52:14.924232 kubelet[2729]: E1212 22:52:14.924043 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d7e34ffea647f9980c1d6396170425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5psvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-669f8df78-7hr8p_calico-system(798aee2d-22cd-4354-84e3-047cb77c02aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:14.926394 containerd[1577]: time="2025-12-12T22:52:14.926354202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 22:52:14.937307 kubelet[2729]: E1212 22:52:14.937159 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:52:15.138766 containerd[1577]: time="2025-12-12T22:52:15.138716640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:15.164088 containerd[1577]: time="2025-12-12T22:52:15.164017524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 22:52:15.164226 containerd[1577]: time="2025-12-12T22:52:15.164078810Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:15.164439 kubelet[2729]: E1212 22:52:15.164379 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 22:52:15.164648 kubelet[2729]: E1212 22:52:15.164441 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 22:52:15.164648 kubelet[2729]: E1212 22:52:15.164588 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5psvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-669f8df78-7hr8p_calico-system(798aee2d-22cd-4354-84e3-047cb77c02aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:15.166237 kubelet[2729]: E1212 22:52:15.166187 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669f8df78-7hr8p" podUID="798aee2d-22cd-4354-84e3-047cb77c02aa" Dec 12 22:52:15.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.28:22-10.0.0.1:46810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:15.674917 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:46810.service - OpenSSH per-connection server daemon (10.0.0.1:46810). Dec 12 22:52:15.739000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:15.740581 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 46810 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:15.740000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:15.741000 audit[5118]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebc81910 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:15.741000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:15.742366 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:15.751419 systemd-logind[1556]: New session 11 of user core. Dec 12 22:52:15.765788 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 22:52:15.769000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:15.771000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:15.928904 sshd[5122]: Connection closed by 10.0.0.1 port 46810 Dec 12 22:52:15.929761 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:15.930000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:15.930000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:15.940351 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:46810.service: Deactivated successfully. Dec 12 22:52:15.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.28:22-10.0.0.1:46810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:15.943191 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 22:52:15.944915 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Dec 12 22:52:15.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.28:22-10.0.0.1:46814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:15.949240 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:46814.service - OpenSSH per-connection server daemon (10.0.0.1:46814). Dec 12 22:52:15.950180 systemd-logind[1556]: Removed session 11. 
Dec 12 22:52:16.015000 audit[5136]: USER_ACCT pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.016062 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 46814 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:16.016000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.016000 audit[5136]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdef20d0 a2=3 a3=0 items=0 ppid=1 pid=5136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:16.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:16.017927 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:16.024955 systemd-logind[1556]: New session 12 of user core. Dec 12 22:52:16.032769 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 22:52:16.034000 audit[5136]: USER_START pid=5136 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.036000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.177026 sshd[5140]: Connection closed by 10.0.0.1 port 46814 Dec 12 22:52:16.177482 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:16.180000 audit[5136]: USER_END pid=5136 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.180000 audit[5136]: CRED_DISP pid=5136 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.186966 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:46814.service: Deactivated successfully. Dec 12 22:52:16.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.28:22-10.0.0.1:46814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:16.189328 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 22:52:16.190803 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. 
Dec 12 22:52:16.198080 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:46828.service - OpenSSH per-connection server daemon (10.0.0.1:46828). Dec 12 22:52:16.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.28:22-10.0.0.1:46828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:16.199692 systemd-logind[1556]: Removed session 12. Dec 12 22:52:16.269000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.270140 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 46828 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:16.270000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.270000 audit[5154]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc37f8770 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:16.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:16.272195 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:16.276986 systemd-logind[1556]: New session 13 of user core. Dec 12 22:52:16.285787 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 22:52:16.288000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.290000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.404017 sshd[5158]: Connection closed by 10.0.0.1 port 46828 Dec 12 22:52:16.404656 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:16.405000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.405000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:16.409323 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:46828.service: Deactivated successfully. 
Dec 12 22:52:16.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.28:22-10.0.0.1:46828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:16.411258 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 22:52:16.412276 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Dec 12 22:52:16.413822 systemd-logind[1556]: Removed session 13. Dec 12 22:52:20.744020 containerd[1577]: time="2025-12-12T22:52:20.743697045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 22:52:20.935821 containerd[1577]: time="2025-12-12T22:52:20.935756940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:20.937238 containerd[1577]: time="2025-12-12T22:52:20.937181585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 22:52:20.937349 containerd[1577]: time="2025-12-12T22:52:20.937277953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:20.937687 kubelet[2729]: E1212 22:52:20.937445 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 22:52:20.937687 kubelet[2729]: E1212 22:52:20.937539 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 22:52:20.938107 kubelet[2729]: E1212 22:52:20.937725 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p526l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls8tz_calico-system(0e402111-6f81-4248-9211-701497715292): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:20.939202 kubelet[2729]: E1212 22:52:20.939119 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:52:21.098602 kernel: hrtimer: interrupt took 5125480 ns Dec 12 22:52:21.420843 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:59600.service - 
OpenSSH per-connection server daemon (10.0.0.1:59600). Dec 12 22:52:21.425275 kernel: kauditd_printk_skb: 48 callbacks suppressed Dec 12 22:52:21.425327 kernel: audit: type=1130 audit(1765579941.420:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.28:22-10.0.0.1:59600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:21.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.28:22-10.0.0.1:59600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:21.479000 audit[5188]: USER_ACCT pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.481402 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 59600 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:21.482000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.484874 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:21.490383 kernel: audit: type=1101 audit(1765579941.479:792): pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.490455 kernel: audit: type=1103 audit(1765579941.482:793): pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.494028 kernel: audit: type=1006 audit(1765579941.482:794): pid=5188 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 22:52:21.482000 audit[5188]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7b5b230 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:21.498721 kernel: audit: type=1300 audit(1765579941.482:794): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7b5b230 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:21.482000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:21.500223 kernel: audit: type=1327 audit(1765579941.482:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:21.502360 systemd-logind[1556]: New session 14 of user core. Dec 12 22:52:21.511688 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 22:52:21.512000 audit[5188]: USER_START pid=5188 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.518573 kernel: audit: type=1105 audit(1765579941.512:795): pid=5188 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.517000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.521542 kernel: audit: type=1103 audit(1765579941.517:796): pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.640571 sshd[5192]: Connection closed by 10.0.0.1 port 59600 Dec 12 22:52:21.641130 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:21.642000 audit[5188]: USER_END pid=5188 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.650243 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:59600.service: Deactivated successfully. Dec 12 22:52:21.642000 audit[5188]: CRED_DISP pid=5188 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.653384 kernel: audit: type=1106 audit(1765579941.642:797): pid=5188 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.653465 kernel: audit: type=1104 audit(1765579941.642:798): pid=5188 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:21.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.28:22-10.0.0.1:59600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:21.653643 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 22:52:21.656379 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Dec 12 22:52:21.657578 systemd-logind[1556]: Removed session 14. 
Dec 12 22:52:21.751630 containerd[1577]: time="2025-12-12T22:52:21.751236829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 22:52:21.961426 containerd[1577]: time="2025-12-12T22:52:21.961315099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:21.962285 containerd[1577]: time="2025-12-12T22:52:21.962248379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 22:52:21.962285 containerd[1577]: time="2025-12-12T22:52:21.962311265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:21.962528 kubelet[2729]: E1212 22:52:21.962458 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 22:52:21.962834 kubelet[2729]: E1212 22:52:21.962548 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 22:52:21.962834 kubelet[2729]: E1212 22:52:21.962687 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f2jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6689b476fb-cz8sh_calico-system(4785c37b-156a-4faf-8cfd-f73b6f7355f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:21.963990 kubelet[2729]: E1212 22:52:21.963942 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:52:24.743240 containerd[1577]: time="2025-12-12T22:52:24.743132654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 22:52:24.935666 containerd[1577]: time="2025-12-12T22:52:24.935570878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:24.936716 containerd[1577]: time="2025-12-12T22:52:24.936631165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 22:52:24.936716 containerd[1577]: time="2025-12-12T22:52:24.936680369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:24.936885 kubelet[2729]: E1212 22:52:24.936848 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:24.937417 kubelet[2729]: E1212 22:52:24.936898 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:24.937417 kubelet[2729]: E1212 22:52:24.937116 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl4cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756859cb7d-lb748_calico-apiserver(22da6c49-6a9b-4270-91c0-2f3cf459c08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:24.937582 containerd[1577]: time="2025-12-12T22:52:24.937285299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 22:52:24.939516 kubelet[2729]: E1212 22:52:24.939353 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:52:25.105030 containerd[1577]: time="2025-12-12T22:52:25.104887969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:25.106083 containerd[1577]: time="2025-12-12T22:52:25.106016700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 22:52:25.106163 containerd[1577]: time="2025-12-12T22:52:25.106051543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:25.106300 
kubelet[2729]: E1212 22:52:25.106240 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:25.106354 kubelet[2729]: E1212 22:52:25.106306 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:25.106960 kubelet[2729]: E1212 22:52:25.106449 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5h9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756859cb7d-ctm57_calico-apiserver(a816068e-6aeb-4536-8ece-56ad68b4e384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:25.107662 kubelet[2729]: E1212 22:52:25.107618 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:52:26.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.28:22-10.0.0.1:59614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:26.665508 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:59614.service - OpenSSH per-connection server daemon (10.0.0.1:59614). Dec 12 22:52:26.672465 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 22:52:26.672591 kernel: audit: type=1130 audit(1765579946.664:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.28:22-10.0.0.1:59614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:26.731643 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 59614 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:26.731000 audit[5213]: USER_ACCT pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.734000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.737376 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:26.738424 kernel: audit: type=1101 audit(1765579946.731:801): pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.738551 kernel: audit: type=1103 audit(1765579946.734:802): pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.740885 kernel: audit: type=1006 audit(1765579946.736:803): pid=5213 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 22:52:26.736000 audit[5213]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff81c6c50 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:26.749690 containerd[1577]: time="2025-12-12T22:52:26.749644786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 22:52:26.751978 kernel: audit: type=1300 audit(1765579946.736:803): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff81c6c50 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:26.752072 kubelet[2729]: E1212 22:52:26.751886 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669f8df78-7hr8p" podUID="798aee2d-22cd-4354-84e3-047cb77c02aa" Dec 12 22:52:26.736000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:26.753229 systemd-logind[1556]: New session 15 of user core. Dec 12 22:52:26.754782 kernel: audit: type=1327 audit(1765579946.736:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:26.758730 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 22:52:26.761000 audit[5213]: USER_START pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.766536 kernel: audit: type=1105 audit(1765579946.761:804): pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.766000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.772564 kernel: audit: type=1103 audit(1765579946.766:805): pid=5217 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.890493 sshd[5217]: Connection closed by 10.0.0.1 port 59614 Dec 12 22:52:26.891021 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:26.891000 audit[5213]: USER_END pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.895023 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:59614.service: Deactivated successfully. Dec 12 22:52:26.891000 audit[5213]: CRED_DISP pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.898126 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 12 22:52:26.899333 kernel: audit: type=1106 audit(1765579946.891:806): pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.899405 kernel: audit: type=1104 audit(1765579946.891:807): pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:26.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.28:22-10.0.0.1:59614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:26.899890 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Dec 12 22:52:26.900781 systemd-logind[1556]: Removed session 15. Dec 12 22:52:26.948341 containerd[1577]: time="2025-12-12T22:52:26.948210421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:26.950940 containerd[1577]: time="2025-12-12T22:52:26.950885554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 22:52:26.951050 containerd[1577]: time="2025-12-12T22:52:26.950993402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:26.951228 kubelet[2729]: E1212 22:52:26.951192 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 22:52:26.951270 kubelet[2729]: E1212 22:52:26.951242 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 22:52:26.951394 kubelet[2729]: E1212 22:52:26.951358 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:26.953344 containerd[1577]: time="2025-12-12T22:52:26.953290625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 22:52:27.146683 containerd[1577]: time="2025-12-12T22:52:27.146619933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:27.147670 containerd[1577]: time="2025-12-12T22:52:27.147635932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 22:52:27.147748 containerd[1577]: time="2025-12-12T22:52:27.147700338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:27.147887 kubelet[2729]: E1212 22:52:27.147850 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 22:52:27.147928 kubelet[2729]: E1212 22:52:27.147900 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 22:52:27.148083 kubelet[2729]: E1212 22:52:27.148015 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:27.149254 kubelet[2729]: E1212 22:52:27.149209 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:52:31.904818 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:53234.service - OpenSSH per-connection server daemon (10.0.0.1:53234). 
Dec 12 22:52:31.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.28:22-10.0.0.1:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:31.911777 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 22:52:31.911846 kernel: audit: type=1130 audit(1765579951.903:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.28:22-10.0.0.1:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:31.968682 kubelet[2729]: E1212 22:52:31.967837 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:31.974000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:31.976420 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 53234 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:31.979223 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:31.974000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:31.983839 kernel: audit: type=1101 audit(1765579951.974:810): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:31.983914 kernel: audit: type=1103 audit(1765579951.974:811): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:31.985637 systemd-logind[1556]: New session 16 of user core. 
Dec 12 22:52:31.986269 kernel: audit: type=1006 audit(1765579951.974:812): pid=5252 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 22:52:31.974000 audit[5252]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe45d93b0 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:31.991904 kernel: audit: type=1300 audit(1765579951.974:812): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe45d93b0 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:31.974000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:31.993748 kernel: audit: type=1327 audit(1765579951.974:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:31.994738 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 22:52:31.997000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.001000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.006745 kernel: audit: type=1105 audit(1765579951.997:813): pid=5252 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.006816 kernel: audit: type=1103 audit(1765579952.001:814): pid=5268 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.153931 sshd[5268]: Connection closed by 10.0.0.1 port 53234 Dec 12 22:52:32.154224 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:32.153000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.157809 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:53234.service: Deactivated successfully. Dec 12 22:52:32.159629 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 12 22:52:32.154000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.161719 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Dec 12 22:52:32.162569 kernel: audit: type=1106 audit(1765579952.153:815): pid=5252 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.162624 kernel: audit: type=1104 audit(1765579952.154:816): pid=5252 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:32.162495 systemd-logind[1556]: Removed session 16. Dec 12 22:52:32.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.28:22-10.0.0.1:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:32.743455 kubelet[2729]: E1212 22:52:32.743223 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:52:32.744187 kubelet[2729]: E1212 22:52:32.744146 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:52:36.743094 kubelet[2729]: E1212 22:52:36.743042 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:52:37.169195 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:53242.service - OpenSSH per-connection server daemon (10.0.0.1:53242). Dec 12 22:52:37.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.28:22-10.0.0.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:52:37.169968 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 22:52:37.170018 kernel: audit: type=1130 audit(1765579957.168:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.28:22-10.0.0.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:37.231000 audit[5287]: USER_ACCT pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.235446 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 53242 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:37.235833 kernel: audit: type=1101 audit(1765579957.231:819): pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.236000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.237429 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:37.241277 kernel: audit: type=1103 audit(1765579957.236:820): pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.241333 kernel: audit: type=1006 audit(1765579957.236:821): pid=5287 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 22:52:37.236000 audit[5287]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8cd0490 a2=3 a3=0 items=0 ppid=1 pid=5287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:37.245080 kernel: audit: type=1300 audit(1765579957.236:821): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8cd0490 a2=3 a3=0 items=0 ppid=1 pid=5287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:37.236000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:37.246535 kernel: audit: type=1327 audit(1765579957.236:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:37.247056 systemd-logind[1556]: New session 17 of user core. Dec 12 22:52:37.256724 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 22:52:37.260000 audit[5287]: USER_START pid=5287 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.264000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.268184 kernel: audit: type=1105 audit(1765579957.260:822): pid=5287 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.268263 kernel: audit: type=1103 audit(1765579957.264:823): pid=5291 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.391557 sshd[5291]: Connection closed by 10.0.0.1 port 53242 Dec 12 22:52:37.392117 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:37.393000 audit[5287]: USER_END pid=5287 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.393000 audit[5287]: CRED_DISP pid=5287 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.403539 kernel: audit: type=1106 audit(1765579957.393:824): pid=5287 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.403623 kernel: audit: type=1104 audit(1765579957.393:825): pid=5287 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.408118 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:53242.service: Deactivated successfully. Dec 12 22:52:37.409808 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 22:52:37.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.28:22-10.0.0.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:37.414811 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. 
Dec 12 22:52:37.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.28:22-10.0.0.1:53252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:37.421044 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:53252.service - OpenSSH per-connection server daemon (10.0.0.1:53252). Dec 12 22:52:37.423939 systemd-logind[1556]: Removed session 17. Dec 12 22:52:37.489000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.490636 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 53252 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:37.490000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.491000 audit[5305]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe9fbfa0 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:37.491000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:37.492275 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:37.497694 systemd-logind[1556]: New session 18 of user core. Dec 12 22:52:37.506734 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 22:52:37.508000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.509000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.666317 sshd[5309]: Connection closed by 10.0.0.1 port 53252 Dec 12 22:52:37.667309 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:37.668000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.668000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.673891 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:53252.service: Deactivated successfully. 
Dec 12 22:52:37.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.28:22-10.0.0.1:53252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:37.677573 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 22:52:37.678762 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Dec 12 22:52:37.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.28:22-10.0.0.1:53258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:37.681792 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:53258.service - OpenSSH per-connection server daemon (10.0.0.1:53258). Dec 12 22:52:37.682641 systemd-logind[1556]: Removed session 18. Dec 12 22:52:37.739000 audit[5321]: USER_ACCT pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.740299 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 53258 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:37.740000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.740000 audit[5321]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff757d2e0 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:37.740000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:37.741943 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:37.747267 systemd-logind[1556]: New session 19 of user core. Dec 12 22:52:37.760967 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 22:52:37.764000 audit[5321]: USER_START pid=5321 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:37.769000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.327000 audit[5341]: NETFILTER_CFG table=filter:136 family=2 entries=26 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:38.327000 audit[5341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd6ddc060 a2=0 a3=1 items=0 ppid=2887 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:38.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:38.333000 audit[5341]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:38.333000 audit[5341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd6ddc060 a2=0 a3=1 items=0 ppid=2887 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:38.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:38.342064 sshd[5325]: Connection closed by 10.0.0.1 port 53258 Dec 12 22:52:38.342550 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:38.344000 audit[5321]: USER_END pid=5321 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.344000 audit[5321]: CRED_DISP pid=5321 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.352000 audit[5344]: NETFILTER_CFG table=filter:138 family=2 entries=38 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:38.352000 audit[5344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffde541f70 a2=0 a3=1 items=0 ppid=2887 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:38.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:38.353000 audit[1]: SERVICE_STOP pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.28:22-10.0.0.1:53258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:38.353783 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:53258.service: Deactivated successfully. Dec 12 22:52:38.357970 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 22:52:38.359933 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Dec 12 22:52:38.359000 audit[5344]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:38.359000 audit[5344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffde541f70 a2=0 a3=1 items=0 ppid=2887 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:38.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:38.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.28:22-10.0.0.1:53270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:38.365136 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:53270.service - OpenSSH per-connection server daemon (10.0.0.1:53270). Dec 12 22:52:38.367052 systemd-logind[1556]: Removed session 19. Dec 12 22:52:38.436000 audit[5348]: USER_ACCT pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.437249 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 53270 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:38.437000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.437000 audit[5348]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9144e60 a2=3 a3=0 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:38.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:38.438848 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:38.443603 systemd-logind[1556]: New session 20 of user core. Dec 12 22:52:38.452691 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 22:52:38.454000 audit[5348]: USER_START pid=5348 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.456000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.693547 sshd[5352]: Connection closed by 10.0.0.1 port 53270 Dec 12 22:52:38.694099 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:38.695000 audit[5348]: USER_END pid=5348 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.695000 audit[5348]: CRED_DISP pid=5348 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.709559 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:53270.service: Deactivated successfully. Dec 12 22:52:38.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.28:22-10.0.0.1:53270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:38.713147 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 22:52:38.714675 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit. Dec 12 22:52:38.718256 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:53284.service - OpenSSH per-connection server daemon (10.0.0.1:53284). Dec 12 22:52:38.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.28:22-10.0.0.1:53284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:38.719658 systemd-logind[1556]: Removed session 20. 
Dec 12 22:52:38.745952 kubelet[2729]: E1212 22:52:38.745911 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:52:38.746420 containerd[1577]: time="2025-12-12T22:52:38.746024061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 22:52:38.785000 audit[5364]: USER_ACCT pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.786088 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 53284 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:38.786000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.786000 audit[5364]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd98f5490 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:38.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:38.787755 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:38.792015 systemd-logind[1556]: New session 21 of user core. Dec 12 22:52:38.801728 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 22:52:38.803000 audit[5364]: USER_START pid=5364 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.805000 audit[5368]: CRED_ACQ pid=5368 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:38.956748 containerd[1577]: time="2025-12-12T22:52:38.956619964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:39.022367 containerd[1577]: time="2025-12-12T22:52:39.022293068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 22:52:39.022519 containerd[1577]: time="2025-12-12T22:52:39.022311746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:39.022599 kubelet[2729]: E1212 22:52:39.022566 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 22:52:39.022653 kubelet[2729]: E1212 22:52:39.022609 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 22:52:39.022763 kubelet[2729]: E1212 22:52:39.022712 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d7e34ffea647f9980c1d6396170425,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5psvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-669f8df78-7hr8p_calico-system(798aee2d-22cd-4354-84e3-047cb77c02aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:39.024968 containerd[1577]: time="2025-12-12T22:52:39.024867172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 22:52:39.048857 sshd[5368]: Connection closed by 10.0.0.1 port 53284 Dec 12 22:52:39.048726 sshd-session[5364]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:39.049000 audit[5364]: USER_END pid=5364 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:39.049000 audit[5364]: CRED_DISP pid=5364 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:39.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.28:22-10.0.0.1:53284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:39.054741 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:53284.service: Deactivated successfully. Dec 12 22:52:39.056776 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 22:52:39.057727 systemd-logind[1556]: Session 21 logged out. Waiting for processes to exit. Dec 12 22:52:39.059170 systemd-logind[1556]: Removed session 21. 
Dec 12 22:52:39.249945 containerd[1577]: time="2025-12-12T22:52:39.249791563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:39.251037 containerd[1577]: time="2025-12-12T22:52:39.250994161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 22:52:39.251180 containerd[1577]: time="2025-12-12T22:52:39.251106874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:39.251750 kubelet[2729]: E1212 22:52:39.251677 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 22:52:39.251807 kubelet[2729]: E1212 22:52:39.251757 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 22:52:39.251905 kubelet[2729]: E1212 22:52:39.251867 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5psvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-669f8df78-7hr8p_calico-system(798aee2d-22cd-4354-84e3-047cb77c02aa): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:39.253224 kubelet[2729]: E1212 22:52:39.253097 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669f8df78-7hr8p" podUID="798aee2d-22cd-4354-84e3-047cb77c02aa" Dec 12 22:52:42.743273 kubelet[2729]: E1212 22:52:42.743133 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:52:42.907000 audit[5385]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:42.912424 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 22:52:42.912569 kernel: audit: type=1325 audit(1765579962.907:867): table=filter:140 family=2 entries=26 op=nft_register_rule pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:42.907000 audit[5385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff196ba20 a2=0 a3=1 items=0 ppid=2887 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:42.916612 kernel: audit: type=1300 audit(1765579962.907:867): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff196ba20 a2=0 a3=1 items=0 ppid=2887 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:42.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:42.918548 kernel: audit: type=1327 audit(1765579962.907:867): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:42.921000 audit[5385]: NETFILTER_CFG table=nat:141 family=2 entries=104 op=nft_register_chain pid=5385 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:42.921000 audit[5385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff196ba20 a2=0 a3=1 items=0 ppid=2887 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:42.929752 kernel: audit: type=1325 audit(1765579962.921:868): table=nat:141 family=2 entries=104 op=nft_register_chain pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 22:52:42.929811 kernel: audit: type=1300 audit(1765579962.921:868): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff196ba20 a2=0 a3=1 items=0 ppid=2887 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:42.929843 kernel: audit: type=1327 audit(1765579962.921:868): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:42.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 22:52:43.743038 kubelet[2729]: E1212 22:52:43.742994 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:43.743184 kubelet[2729]: E1212 22:52:43.743063 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:44.061452 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:38288.service - OpenSSH per-connection server daemon (10.0.0.1:38288). Dec 12 22:52:44.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.28:22-10.0.0.1:38288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:44.066633 kernel: audit: type=1130 audit(1765579964.060:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.28:22-10.0.0.1:38288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 22:52:44.159019 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 38288 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:44.157000 audit[5388]: USER_ACCT pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.163033 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:44.160000 audit[5388]: CRED_ACQ pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.170717 kernel: audit: type=1101 audit(1765579964.157:870): pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.170822 kernel: audit: type=1103 audit(1765579964.160:871): pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.174563 kernel: audit: type=1006 audit(1765579964.160:872): pid=5388 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 22:52:44.174248 systemd-logind[1556]: New session 22 of user core. Dec 12 22:52:44.160000 audit[5388]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2d1ec80 a2=3 a3=0 items=0 ppid=1 pid=5388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:44.160000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:44.181992 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 22:52:44.184000 audit[5388]: USER_START pid=5388 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.187000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.283105 sshd[5392]: Connection closed by 10.0.0.1 port 38288 Dec 12 22:52:44.283398 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:44.283000 audit[5388]: USER_END pid=5388 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.283000 audit[5388]: CRED_DISP pid=5388 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:44.288343 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:38288.service: Deactivated successfully. Dec 12 22:52:44.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.28:22-10.0.0.1:38288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:44.290425 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 22:52:44.291892 systemd-logind[1556]: Session 22 logged out. Waiting for processes to exit. Dec 12 22:52:44.294081 systemd-logind[1556]: Removed session 22. 
Dec 12 22:52:44.743376 containerd[1577]: time="2025-12-12T22:52:44.743231719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 22:52:44.924561 containerd[1577]: time="2025-12-12T22:52:44.924422943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:44.925602 containerd[1577]: time="2025-12-12T22:52:44.925546125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:44.925671 containerd[1577]: time="2025-12-12T22:52:44.925601082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 22:52:44.925849 kubelet[2729]: E1212 22:52:44.925804 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 22:52:44.926114 kubelet[2729]: E1212 22:52:44.925860 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 22:52:44.926114 kubelet[2729]: E1212 22:52:44.925985 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f2jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6689b476fb-cz8sh_calico-system(4785c37b-156a-4faf-8cfd-f73b6f7355f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:44.927457 kubelet[2729]: E1212 22:52:44.927406 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4" Dec 12 22:52:47.745370 containerd[1577]: time="2025-12-12T22:52:47.745306036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 22:52:47.939371 containerd[1577]: time="2025-12-12T22:52:47.939298226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:47.940639 containerd[1577]: time="2025-12-12T22:52:47.940590851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 22:52:47.940749 containerd[1577]: time="2025-12-12T22:52:47.940686047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:47.941224 kubelet[2729]: E1212 22:52:47.941030 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 22:52:47.941224 kubelet[2729]: E1212 22:52:47.941075 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 22:52:47.941709 kubelet[2729]: E1212 22:52:47.941206 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p526l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls8tz_calico-system(0e402111-6f81-4248-9211-701497715292): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:47.942947 kubelet[2729]: E1212 22:52:47.942912 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls8tz" podUID="0e402111-6f81-4248-9211-701497715292" Dec 12 22:52:49.295416 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:38294.service - OpenSSH per-connection server daemon (10.0.0.1:38294). 
Dec 12 22:52:49.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.28:22-10.0.0.1:38294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:49.296577 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 22:52:49.296624 kernel: audit: type=1130 audit(1765579969.295:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.28:22-10.0.0.1:38294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:49.357000 audit[5406]: USER_ACCT pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.361568 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 38294 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:49.361000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.365974 kernel: audit: type=1101 audit(1765579969.357:879): pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.366025 kernel: audit: type=1103 audit(1765579969.361:880): pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.363829 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:49.368441 kernel: audit: type=1006 audit(1765579969.361:881): pid=5406 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 22:52:49.361000 audit[5406]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa89b450 a2=3 a3=0 items=0 ppid=1 pid=5406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:49.372484 kernel: audit: type=1300 audit(1765579969.361:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa89b450 a2=3 a3=0 items=0 ppid=1 pid=5406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:49.361000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:49.374693 kernel: audit: type=1327 audit(1765579969.361:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:49.375892 systemd-logind[1556]: New session 23 of user core. Dec 12 22:52:49.384771 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 22:52:49.386000 audit[5406]: USER_START pid=5406 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.391589 kernel: audit: type=1105 audit(1765579969.386:882): pid=5406 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.391000 audit[5410]: CRED_ACQ pid=5410 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.395560 kernel: audit: type=1103 audit(1765579969.391:883): pid=5410 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.473550 sshd[5410]: Connection closed by 10.0.0.1 port 38294 Dec 12 22:52:49.473853 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:49.474000 audit[5406]: USER_END pid=5406 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.477204 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:38294.service: Deactivated successfully. Dec 12 22:52:49.474000 audit[5406]: CRED_DISP pid=5406 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.479435 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 22:52:49.481814 kernel: audit: type=1106 audit(1765579969.474:884): pid=5406 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.481882 kernel: audit: type=1104 audit(1765579969.474:885): pid=5406 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:49.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.28:22-10.0.0.1:38294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:49.482089 systemd-logind[1556]: Session 23 logged out. Waiting for processes to exit. Dec 12 22:52:49.485098 systemd-logind[1556]: Removed session 23. 
Dec 12 22:52:49.746963 containerd[1577]: time="2025-12-12T22:52:49.746690479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 22:52:49.952962 containerd[1577]: time="2025-12-12T22:52:49.952834296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:49.953787 containerd[1577]: time="2025-12-12T22:52:49.953751182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 22:52:49.953892 containerd[1577]: time="2025-12-12T22:52:49.953835219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:49.954027 kubelet[2729]: E1212 22:52:49.953962 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:49.954299 kubelet[2729]: E1212 22:52:49.954039 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:49.954299 kubelet[2729]: E1212 22:52:49.954156 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl4cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756859cb7d-lb748_calico-apiserver(22da6c49-6a9b-4270-91c0-2f3cf459c08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:49.955396 kubelet[2729]: E1212 22:52:49.955366 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756859cb7d-lb748" podUID="22da6c49-6a9b-4270-91c0-2f3cf459c08b" Dec 12 22:52:50.743812 kubelet[2729]: E1212 22:52:50.743757 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669f8df78-7hr8p" podUID="798aee2d-22cd-4354-84e3-047cb77c02aa" Dec 12 22:52:53.745943 containerd[1577]: time="2025-12-12T22:52:53.745904408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 22:52:53.944305 containerd[1577]: time="2025-12-12T22:52:53.944256108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:53.945282 containerd[1577]: time="2025-12-12T22:52:53.945113205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 22:52:53.945282 containerd[1577]: time="2025-12-12T22:52:53.945203403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:53.945562 kubelet[2729]: E1212 22:52:53.945511 2729 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:53.945945 kubelet[2729]: E1212 22:52:53.945613 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 22:52:53.947609 kubelet[2729]: E1212 22:52:53.946114 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5h9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756859cb7d-ctm57_calico-apiserver(a816068e-6aeb-4536-8ece-56ad68b4e384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:53.948758 kubelet[2729]: E1212 22:52:53.948711 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-756859cb7d-ctm57" podUID="a816068e-6aeb-4536-8ece-56ad68b4e384" Dec 12 22:52:54.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.28:22-10.0.0.1:48906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:54.489779 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:48906.service - OpenSSH per-connection server daemon (10.0.0.1:48906). Dec 12 22:52:54.493257 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 22:52:54.493323 kernel: audit: type=1130 audit(1765579974.489:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.28:22-10.0.0.1:48906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:54.548000 audit[5433]: USER_ACCT pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.549069 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 48906 ssh2: RSA SHA256:XvtofJ234oL+USFgK9vTb62WbJUYBCr2y6ahX4gF+sA Dec 12 22:52:54.551000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.553063 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 22:52:54.555498 kernel: audit: type=1101 audit(1765579974.548:888): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.555633 kernel: audit: type=1103 audit(1765579974.551:889): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.557981 kernel: audit: type=1006 audit(1765579974.551:890): pid=5433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 22:52:54.551000 audit[5433]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddb63f40 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:54.561894 kernel: audit: type=1300 audit(1765579974.551:890): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddb63f40 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 22:52:54.561971 kernel: audit: type=1327 audit(1765579974.551:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:54.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 22:52:54.566852 systemd-logind[1556]: New session 24 of 
user core. Dec 12 22:52:54.576780 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 22:52:54.579000 audit[5433]: USER_START pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.583000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.587061 kernel: audit: type=1105 audit(1765579974.579:891): pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.587116 kernel: audit: type=1103 audit(1765579974.583:892): pid=5437 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.718147 sshd[5437]: Connection closed by 10.0.0.1 port 48906 Dec 12 22:52:54.718432 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Dec 12 22:52:54.719000 audit[5433]: USER_END pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.722851 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:48906.service: Deactivated successfully. Dec 12 22:52:54.719000 audit[5433]: CRED_DISP pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.724964 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 22:52:54.726713 systemd-logind[1556]: Session 24 logged out. Waiting for processes to exit. Dec 12 22:52:54.727414 kernel: audit: type=1106 audit(1765579974.719:893): pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.727465 kernel: audit: type=1104 audit(1765579974.719:894): pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 22:52:54.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.28:22-10.0.0.1:48906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 22:52:54.727855 systemd-logind[1556]: Removed session 24. 
Dec 12 22:52:54.752739 containerd[1577]: time="2025-12-12T22:52:54.752573338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 22:52:54.948813 containerd[1577]: time="2025-12-12T22:52:54.948737872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:54.950854 containerd[1577]: time="2025-12-12T22:52:54.950748743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 22:52:54.950948 containerd[1577]: time="2025-12-12T22:52:54.950813382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:54.951139 kubelet[2729]: E1212 22:52:54.951095 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 22:52:54.952068 kubelet[2729]: E1212 22:52:54.951148 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 22:52:54.952068 kubelet[2729]: E1212 22:52:54.951257 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:54.953647 containerd[1577]: time="2025-12-12T22:52:54.953330800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 22:52:55.126784 containerd[1577]: time="2025-12-12T22:52:55.125460242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 22:52:55.127741 containerd[1577]: time="2025-12-12T22:52:55.127634754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 22:52:55.127829 containerd[1577]: time="2025-12-12T22:52:55.127710752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 22:52:55.127932 kubelet[2729]: E1212 22:52:55.127885 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 22:52:55.127986 kubelet[2729]: E1212 22:52:55.127938 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 22:52:55.128091 kubelet[2729]: E1212 22:52:55.128049 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nknqg_calico-system(3344cb69-6333-4eee-b470-ee8fe022cd55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 22:52:55.129512 kubelet[2729]: E1212 22:52:55.129464 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nknqg" podUID="3344cb69-6333-4eee-b470-ee8fe022cd55" Dec 12 22:52:55.744419 kubelet[2729]: E1212 22:52:55.744381 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 22:52:55.745818 kubelet[2729]: E1212 22:52:55.745771 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-6689b476fb-cz8sh" podUID="4785c37b-156a-4faf-8cfd-f73b6f7355f4"