Dec 16 12:07:07.303212 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 16 12:07:07.303237 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 12:07:07.303245 kernel: KASLR enabled Dec 16 12:07:07.303252 kernel: efi: EFI v2.7 by EDK II Dec 16 12:07:07.303257 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 16 12:07:07.303263 kernel: random: crng init done Dec 16 12:07:07.303270 kernel: secureboot: Secure boot disabled Dec 16 12:07:07.303276 kernel: ACPI: Early table checksum verification disabled Dec 16 12:07:07.303284 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 16 12:07:07.303290 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 16 12:07:07.303297 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303303 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303309 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303315 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303324 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303331 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303338 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303344 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303351 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:07:07.303357 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 16 12:07:07.303364 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:07:07.303371 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:07:07.303378 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 16 12:07:07.303385 kernel: Zone ranges: Dec 16 12:07:07.303392 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:07:07.303398 kernel: DMA32 empty Dec 16 12:07:07.303404 kernel: Normal empty Dec 16 12:07:07.303411 kernel: Device empty Dec 16 12:07:07.303417 kernel: Movable zone start for each node Dec 16 12:07:07.303424 kernel: Early memory node ranges Dec 16 12:07:07.303430 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 16 12:07:07.303437 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 16 12:07:07.303443 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 16 12:07:07.303450 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 16 12:07:07.303458 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 16 12:07:07.303464 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 16 12:07:07.303471 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 16 12:07:07.303477 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 16 12:07:07.303483 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 16 12:07:07.303490 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 16 12:07:07.303500 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 
16 12:07:07.303507 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 16 12:07:07.303514 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 16 12:07:07.303521 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:07:07.303528 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 16 12:07:07.303535 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 16 12:07:07.303542 kernel: psci: probing for conduit method from ACPI. Dec 16 12:07:07.303549 kernel: psci: PSCIv1.1 detected in firmware. Dec 16 12:07:07.303557 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:07:07.303565 kernel: psci: Trusted OS migration not required Dec 16 12:07:07.303571 kernel: psci: SMC Calling Convention v1.1 Dec 16 12:07:07.303579 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 16 12:07:07.303586 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:07:07.303593 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:07:07.303599 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 16 12:07:07.303606 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:07:07.303620 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:07:07.303629 kernel: CPU features: detected: Spectre-v4 Dec 16 12:07:07.303636 kernel: CPU features: detected: Spectre-BHB Dec 16 12:07:07.303644 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:07:07.303651 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:07:07.303657 kernel: CPU features: detected: ARM erratum 1418040 Dec 16 12:07:07.303664 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:07:07.303671 kernel: alternatives: applying boot alternatives Dec 16 12:07:07.303678 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:07:07.303686 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:07:07.303692 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:07:07.303699 kernel: Fallback order for Node 0: 0 Dec 16 12:07:07.303708 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 16 12:07:07.303718 kernel: Policy zone: DMA Dec 16 12:07:07.303727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:07:07.303734 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 16 12:07:07.303741 kernel: software IO TLB: area num 4. Dec 16 12:07:07.303748 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 16 12:07:07.303755 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 16 12:07:07.303761 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 12:07:07.303768 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:07:07.303776 kernel: rcu: RCU event tracing is enabled. Dec 16 12:07:07.303783 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 12:07:07.303790 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:07:07.303798 kernel: Tracing variant of Tasks RCU enabled. 
Dec 16 12:07:07.303805 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:07:07.303812 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 12:07:07.303819 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:07:07.303826 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:07:07.303833 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:07:07.303840 kernel: GICv3: 256 SPIs implemented Dec 16 12:07:07.303847 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:07:07.303854 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:07:07.303861 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 16 12:07:07.303867 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 12:07:07.303875 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 16 12:07:07.303882 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 16 12:07:07.303889 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 16 12:07:07.303896 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 16 12:07:07.303903 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 16 12:07:07.303910 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 16 12:07:07.303917 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:07:07.303923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:07:07.303930 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 16 12:07:07.303937 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 16 12:07:07.303944 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 16 12:07:07.303952 kernel: arm-pv: using stolen time PV Dec 16 12:07:07.303960 kernel: Console: colour dummy device 80x25 Dec 16 12:07:07.303967 kernel: ACPI: Core revision 20240827 Dec 16 12:07:07.303974 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 16 12:07:07.303982 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:07:07.303989 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:07:07.303996 kernel: landlock: Up and running. Dec 16 12:07:07.304003 kernel: SELinux: Initializing. Dec 16 12:07:07.304011 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:07:07.304028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:07:07.304048 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:07:07.304056 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:07:07.304063 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:07:07.304071 kernel: Remapping and enabling EFI services. Dec 16 12:07:07.304078 kernel: smp: Bringing up secondary CPUs ... 
Dec 16 12:07:07.304087 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:07:07.304099 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 16 12:07:07.304108 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 16 12:07:07.304115 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:07:07.304123 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 16 12:07:07.304130 kernel: Detected PIPT I-cache on CPU2 Dec 16 12:07:07.304138 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 16 12:07:07.304147 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 16 12:07:07.304155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:07:07.304162 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 16 12:07:07.304169 kernel: Detected PIPT I-cache on CPU3 Dec 16 12:07:07.304177 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 16 12:07:07.304184 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 16 12:07:07.304192 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:07:07.304201 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 16 12:07:07.304208 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 12:07:07.304216 kernel: SMP: Total of 4 processors activated. Dec 16 12:07:07.304223 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:07:07.304231 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:07:07.304238 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:07:07.304246 kernel: CPU features: detected: Common not Private translations Dec 16 12:07:07.304255 kernel: CPU features: detected: CRC32 instructions Dec 16 12:07:07.304262 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 16 12:07:07.304270 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:07:07.304277 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:07:07.304285 kernel: CPU features: detected: Privileged Access Never Dec 16 12:07:07.304292 kernel: CPU features: detected: RAS Extension Support Dec 16 12:07:07.304299 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:07:07.304307 kernel: alternatives: applying system-wide alternatives Dec 16 12:07:07.304316 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 16 12:07:07.304324 kernel: Memory: 2450848K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 99104K reserved, 16384K cma-reserved) Dec 16 12:07:07.304331 kernel: devtmpfs: initialized Dec 16 12:07:07.304339 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:07:07.304346 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 12:07:07.304354 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:07:07.304361 kernel: 0 pages in range for non-PLT usage Dec 16 12:07:07.304370 kernel: 515168 pages in range for PLT usage Dec 16 12:07:07.304377 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:07:07.304385 kernel: SMBIOS 3.0.0 present. 
Dec 16 12:07:07.304393 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 16 12:07:07.304400 kernel: DMI: Memory slots populated: 1/1 Dec 16 12:07:07.304407 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:07:07.304415 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:07:07.304424 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:07:07.304431 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:07:07.304439 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:07:07.304446 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Dec 16 12:07:07.304454 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:07:07.304461 kernel: cpuidle: using governor menu Dec 16 12:07:07.304469 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 16 12:07:07.304478 kernel: ASID allocator initialised with 32768 entries Dec 16 12:07:07.304485 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:07:07.304493 kernel: Serial: AMBA PL011 UART driver Dec 16 12:07:07.304500 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:07:07.304508 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:07:07.304515 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:07:07.304523 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:07:07.304530 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:07:07.304539 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:07:07.304546 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:07:07.304554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:07:07.304562 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:07:07.304569 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:07:07.304577 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:07:07.304584 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:07:07.304593 kernel: ACPI: Interpreter enabled Dec 16 12:07:07.304601 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:07:07.304608 kernel: ACPI: MCFG table detected, 1 entries Dec 16 12:07:07.304621 kernel: ACPI: CPU0 has been hot-added Dec 16 12:07:07.304629 kernel: ACPI: CPU1 has been hot-added Dec 16 12:07:07.304636 kernel: ACPI: CPU2 has been hot-added Dec 16 12:07:07.304643 kernel: ACPI: CPU3 has been hot-added Dec 16 12:07:07.304653 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:07:07.304660 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:07:07.304668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 12:07:07.304825 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:07:07.304912 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 12:07:07.304992 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 12:07:07.305097 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 16 12:07:07.305180 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 16 12:07:07.305190 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 16 12:07:07.305198 
kernel: PCI host bridge to bus 0000:00 Dec 16 12:07:07.305281 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 16 12:07:07.305354 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 12:07:07.305429 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 16 12:07:07.305500 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 12:07:07.305594 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:07:07.305695 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 12:07:07.305782 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 16 12:07:07.305862 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 16 12:07:07.305945 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 16 12:07:07.306036 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 16 12:07:07.306120 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 16 12:07:07.306200 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 16 12:07:07.306273 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 12:07:07.306345 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 12:07:07.306420 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 12:07:07.306430 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 12:07:07.306437 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 12:07:07.306445 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 12:07:07.306453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 12:07:07.306460 kernel: iommu: Default domain type: Translated Dec 16 12:07:07.306469 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:07:07.306477 kernel: efivars: Registered efivars operations Dec 16 12:07:07.306485 kernel: vgaarb: loaded Dec 16 12:07:07.306492 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:07:07.306499 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:07:07.306507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:07:07.306515 kernel: pnp: PnP ACPI init Dec 16 12:07:07.306604 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 12:07:07.306622 kernel: pnp: PnP ACPI: found 1 devices Dec 16 12:07:07.306630 kernel: NET: Registered PF_INET protocol family Dec 16 12:07:07.306638 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:07:07.306737 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:07:07.306751 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:07:07.306760 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:07:07.306772 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:07:07.306834 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:07:07.307121 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:07:07.307129 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:07:07.307137 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:07:07.307145 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:07:07.307153 
kernel: kvm [1]: HYP mode not available Dec 16 12:07:07.307166 kernel: Initialise system trusted keyrings Dec 16 12:07:07.307174 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:07:07.307182 kernel: Key type asymmetric registered Dec 16 12:07:07.307189 kernel: Asymmetric key parser 'x509' registered Dec 16 12:07:07.307197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:07:07.307205 kernel: io scheduler mq-deadline registered Dec 16 12:07:07.307212 kernel: io scheduler kyber registered Dec 16 12:07:07.307221 kernel: io scheduler bfq registered Dec 16 12:07:07.307229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 12:07:07.307237 kernel: ACPI: button: Power Button [PWRB] Dec 16 12:07:07.307245 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 12:07:07.307368 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 16 12:07:07.307380 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:07:07.307387 kernel: thunder_xcv, ver 1.0 Dec 16 12:07:07.307398 kernel: thunder_bgx, ver 1.0 Dec 16 12:07:07.307405 kernel: nicpf, ver 1.0 Dec 16 12:07:07.307413 kernel: nicvf, ver 1.0 Dec 16 12:07:07.307506 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:07:07.307584 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:07:06 UTC (1765886826) Dec 16 12:07:07.307594 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:07:07.307604 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:07:07.307612 kernel: watchdog: NMI not fully supported Dec 16 12:07:07.307630 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:07:07.307637 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:07:07.307645 kernel: Segment Routing with IPv6 Dec 16 12:07:07.307652 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:07:07.307660 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:07:07.307667 kernel: Key type dns_resolver registered Dec 16 12:07:07.307677 kernel: registered taskstats version 1 Dec 16 12:07:07.307684 kernel: Loading compiled-in X.509 certificates Dec 16 12:07:07.307691 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 12:07:07.307699 kernel: Demotion targets for Node 0: null Dec 16 12:07:07.307706 kernel: Key type .fscrypt registered Dec 16 12:07:07.307714 kernel: Key type fscrypt-provisioning registered Dec 16 12:07:07.307721 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 12:07:07.307730 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:07:07.307737 kernel: ima: No architecture policies found Dec 16 12:07:07.307745 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:07:07.307752 kernel: clk: Disabling unused clocks Dec 16 12:07:07.307760 kernel: PM: genpd: Disabling unused power domains Dec 16 12:07:07.307768 kernel: Freeing unused kernel memory: 12480K Dec 16 12:07:07.308091 kernel: Run /init as init process Dec 16 12:07:07.308108 kernel: with arguments: Dec 16 12:07:07.308116 kernel: /init Dec 16 12:07:07.308123 kernel: with environment: Dec 16 12:07:07.308130 kernel: HOME=/ Dec 16 12:07:07.308138 kernel: TERM=linux Dec 16 12:07:07.308274 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 16 12:07:07.308358 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 16 12:07:07.308371 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:07:07.308378 kernel: GPT:16515071 != 27000831 Dec 16 12:07:07.308386 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:07:07.308393 kernel: GPT:16515071 != 27000831 Dec 16 12:07:07.308400 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:07:07.308408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:07:07.308983 kernel: SCSI subsystem initialized Dec 16 12:07:07.309000 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:07:07.309008 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:07:07.309087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:07:07.309100 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:07:07.309114 kernel: raid6: neonx8 gen() 15495 MB/s Dec 16 12:07:07.309122 kernel: raid6: neonx4 gen() 15669 MB/s Dec 16 12:07:07.309135 kernel: raid6: neonx2 gen() 12332 MB/s Dec 16 12:07:07.309143 kernel: raid6: neonx1 gen() 10419 MB/s Dec 16 12:07:07.309216 kernel: raid6: int64x8 gen() 6783 MB/s Dec 16 12:07:07.309230 kernel: raid6: int64x4 gen() 7258 MB/s Dec 16 12:07:07.309238 kernel: raid6: int64x2 gen() 6049 MB/s Dec 16 12:07:07.309246 kernel: raid6: int64x1 gen() 5052 MB/s Dec 16 12:07:07.309253 kernel: raid6: using algorithm neonx4 gen() 15669 MB/s Dec 16 12:07:07.309264 kernel: raid6: .... 
xor() 12336 MB/s, rmw enabled Dec 16 12:07:07.309273 kernel: raid6: using neon recovery algorithm Dec 16 12:07:07.309280 kernel: xor: measuring software checksum speed Dec 16 12:07:07.309288 kernel: 8regs : 21596 MB/sec Dec 16 12:07:07.309295 kernel: 32regs : 20854 MB/sec Dec 16 12:07:07.309303 kernel: arm64_neon : 28089 MB/sec Dec 16 12:07:07.309311 kernel: xor: using function: arm64_neon (28089 MB/sec) Dec 16 12:07:07.309318 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:07:07.309327 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (205) Dec 16 12:07:07.309335 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 12:07:07.309343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:07:07.309350 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:07:07.309358 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:07:07.309366 kernel: loop: module loaded Dec 16 12:07:07.309373 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 12:07:07.309382 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:07:07.309390 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:07:07.309401 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:07:07.309410 systemd[1]: Detected virtualization kvm. Dec 16 12:07:07.309418 systemd[1]: Detected architecture arm64. Dec 16 12:07:07.309427 systemd[1]: Running in initrd. Dec 16 12:07:07.309435 systemd[1]: No hostname configured, using default hostname. Dec 16 12:07:07.309444 systemd[1]: Hostname set to . Dec 16 12:07:07.309452 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:07:07.309460 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:07:07.309468 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:07:07.309476 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:07:07.309486 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:07:07.309495 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:07:07.309503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:07:07.309512 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:07:07.309520 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:07:07.309530 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:07:07.309538 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:07:07.309546 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:07:07.309554 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:07:07.309562 systemd[1]: Reached target slices.target - Slice Units. 
Dec 16 12:07:07.309570 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:07:07.309579 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:07:07.309588 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:07:07.309597 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:07:07.309605 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:07:07.309619 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:07:07.309636 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:07:07.309646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:07:07.309655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:07:07.309665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:07:07.309673 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:07:07.309682 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:07:07.309690 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:07:07.309699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:07:07.309709 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:07:07.309717 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:07:07.309726 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:07:07.309734 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:07:07.309743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:07:07.309753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:07:07.309761 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:07:07.309770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:07:07.309778 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:07:07.309787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:07:07.309796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:07:07.309834 systemd-journald[346]: Collecting audit messages is enabled. Dec 16 12:07:07.309854 kernel: Bridge firewalling registered Dec 16 12:07:07.309866 systemd-journald[346]: Journal started Dec 16 12:07:07.309884 systemd-journald[346]: Runtime Journal (/run/log/journal/c653a8e0267d465d971c9bbce86d4c4f) is 6M, max 48.5M, 42.4M free. Dec 16 12:07:07.303664 systemd-modules-load[348]: Inserted module 'br_netfilter' Dec 16 12:07:07.315208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:07:07.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.318043 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 12:07:07.318064 kernel: audit: type=1130 audit(1765886827.316:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.324457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:07:07.329214 kernel: audit: type=1130 audit(1765886827.321:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.329240 kernel: audit: type=1130 audit(1765886827.325:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.329210 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:07:07.334479 kernel: audit: type=1130 audit(1765886827.330:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.334335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:07:07.336171 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:07:07.338444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:07:07.347576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:07:07.356725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:07:07.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.358833 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:07:07.366459 kernel: audit: type=1130 audit(1765886827.358:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.366483 kernel: audit: type=1334 audit(1765886827.359:7): prog-id=6 op=LOAD Dec 16 12:07:07.359000 audit: BPF prog-id=6 op=LOAD Dec 16 12:07:07.362441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:07:07.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:07.367824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:07:07.373138 kernel: audit: type=1130 audit(1765886827.369:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.369307 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:07:07.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.379054 kernel: audit: type=1130 audit(1765886827.375:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.382236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:07:07.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.388035 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:07:07.390824 kernel: audit: type=1130 audit(1765886827.386:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.411512 dracut-cmdline[388]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:07:07.425085 systemd-resolved[381]: Positive Trust Anchors: Dec 16 12:07:07.425113 systemd-resolved[381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:07:07.425117 systemd-resolved[381]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:07:07.425149 systemd-resolved[381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:07:07.452185 systemd-resolved[381]: Defaulting to hostname 'linux'. Dec 16 12:07:07.453006 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:07:07.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.454276 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:07:07.492046 kernel: Loading iSCSI transport class v2.0-870. 
Dec 16 12:07:07.500034 kernel: iscsi: registered transport (tcp) Dec 16 12:07:07.514041 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:07:07.514097 kernel: QLogic iSCSI HBA Driver Dec 16 12:07:07.534950 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:07:07.555177 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:07:07.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.557996 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:07:07.605004 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:07:07.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.607603 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:07:07.609327 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:07:07.653257 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:07:07.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.655000 audit: BPF prog-id=7 op=LOAD Dec 16 12:07:07.655000 audit: BPF prog-id=8 op=LOAD Dec 16 12:07:07.655900 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:07:07.685915 systemd-udevd[626]: Using default interface naming scheme 'v257'. Dec 16 12:07:07.695325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:07:07.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.699202 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:07:07.716068 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:07:07.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.718000 audit: BPF prog-id=9 op=LOAD Dec 16 12:07:07.719088 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:07:07.726584 dracut-pre-trigger[705]: rd.md=0: removing MD RAID activation Dec 16 12:07:07.749474 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:07:07.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.751977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 16 12:07:07.761259 systemd-networkd[729]: lo: Link UP Dec 16 12:07:07.761267 systemd-networkd[729]: lo: Gained carrier Dec 16 12:07:07.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.761860 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:07:07.763350 systemd[1]: Reached target network.target - Network. Dec 16 12:07:07.809220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:07:07.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.812250 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:07:07.867412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:07:07.880541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:07:07.887507 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:07:07.896027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:07:07.899580 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:07:07.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.906392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:07:07.906493 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:07:07.907600 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:07:07.911145 systemd-networkd[729]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:07:07.911149 systemd-networkd[729]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:07:07.912116 systemd-networkd[729]: eth0: Link UP Dec 16 12:07:07.912297 systemd-networkd[729]: eth0: Gained carrier Dec 16 12:07:07.912306 systemd-networkd[729]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:07:07.912938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:07:07.924456 disk-uuid[799]: Primary Header is updated. Dec 16 12:07:07.924456 disk-uuid[799]: Secondary Entries is updated. Dec 16 12:07:07.924456 disk-uuid[799]: Secondary Header is updated. Dec 16 12:07:07.932109 systemd-networkd[729]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:07:07.933536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:07:07.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.938137 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 12:07:07.939769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:07:07.944137 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:07:07.948268 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:07:07.953590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:07:07.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:07.978996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:07:07.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:08.957967 disk-uuid[800]: Warning: The kernel is still using the old partition table. Dec 16 12:07:08.957967 disk-uuid[800]: The new table will be used at the next reboot or after you Dec 16 12:07:08.957967 disk-uuid[800]: run partprobe(8) or kpartx(8) Dec 16 12:07:08.957967 disk-uuid[800]: The operation has completed successfully. Dec 16 12:07:08.967087 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:07:08.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:08.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:08.967192 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:07:08.969445 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:07:09.006912 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (828) Dec 16 12:07:09.006957 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:07:09.008502 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:07:09.011553 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:07:09.011576 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:07:09.018030 kernel: BTRFS info (device vda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:07:09.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.019109 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:07:09.022175 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 12:07:09.109911 ignition[847]: Ignition 2.24.0 Dec 16 12:07:09.109925 ignition[847]: Stage: fetch-offline Dec 16 12:07:09.109965 ignition[847]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:07:09.109974 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:07:09.110152 ignition[847]: parsed url from cmdline: "" Dec 16 12:07:09.110156 ignition[847]: no config URL provided Dec 16 12:07:09.110161 ignition[847]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:07:09.110170 ignition[847]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:07:09.110206 ignition[847]: op(1): [started] loading QEMU firmware config module Dec 16 12:07:09.110210 ignition[847]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 12:07:09.117182 ignition[847]: op(1): [finished] loading QEMU firmware config module Dec 16 12:07:09.160919 ignition[847]: parsing config with SHA512: a316dd7b3090c1424ca828ec2293eb6dfaffd45aa0239a63dc94e538e63ff0d7ec7e4d2dbbce3e1687df5e77c2e854f06333ec6403af04d4b27167da60be2162 Dec 16 12:07:09.166536 unknown[847]: fetched base config from "system" Dec 16 12:07:09.167465 unknown[847]: fetched user config from "qemu" Dec 16 12:07:09.167856 ignition[847]: fetch-offline: fetch-offline passed Dec 16 12:07:09.167919 ignition[847]: Ignition finished successfully Dec 16 12:07:09.170406 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:07:09.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.172396 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 12:07:09.175159 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:07:09.203324 ignition[861]: Ignition 2.24.0 Dec 16 12:07:09.203342 ignition[861]: Stage: kargs Dec 16 12:07:09.203483 ignition[861]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:07:09.203494 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:07:09.204268 ignition[861]: kargs: kargs passed Dec 16 12:07:09.204315 ignition[861]: Ignition finished successfully Dec 16 12:07:09.209122 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:07:09.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.211869 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:07:09.234050 ignition[868]: Ignition 2.24.0 Dec 16 12:07:09.234064 ignition[868]: Stage: disks Dec 16 12:07:09.234222 ignition[868]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:07:09.234230 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:07:09.236751 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:07:09.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.235010 ignition[868]: disks: disks passed Dec 16 12:07:09.239225 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 16 12:07:09.235070 ignition[868]: Ignition finished successfully Dec 16 12:07:09.240890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:07:09.242935 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:07:09.245035 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:07:09.246969 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:07:09.250150 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:07:09.294052 systemd-fsck[877]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 16 12:07:09.299316 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:07:09.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.302118 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:07:09.363155 systemd-networkd[729]: eth0: Gained IPv6LL Dec 16 12:07:09.365808 kernel: EXT4-fs (vda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 12:07:09.366387 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:07:09.367490 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:07:09.370479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:07:09.372301 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:07:09.373479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:07:09.373514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:07:09.373560 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:07:09.388025 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:07:09.391837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:07:09.394719 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Dec 16 12:07:09.394742 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:07:09.396039 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:07:09.399043 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:07:09.399073 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:07:09.400748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:07:09.526845 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:07:09.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.529504 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:07:09.532257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:07:09.552350 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 16 12:07:09.554806 kernel: BTRFS info (device vda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:07:09.572161 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:07:09.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.584822 ignition[983]: INFO : Ignition 2.24.0 Dec 16 12:07:09.584822 ignition[983]: INFO : Stage: mount Dec 16 12:07:09.587643 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:07:09.587643 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:07:09.587643 ignition[983]: INFO : mount: mount passed Dec 16 12:07:09.587643 ignition[983]: INFO : Ignition finished successfully Dec 16 12:07:09.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:09.590093 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:07:09.592909 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:07:10.367397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:07:10.398035 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994) Dec 16 12:07:10.400523 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:07:10.400544 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:07:10.403645 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:07:10.403694 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:07:10.405315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:07:10.433669 ignition[1011]: INFO : Ignition 2.24.0 Dec 16 12:07:10.433669 ignition[1011]: INFO : Stage: files Dec 16 12:07:10.435589 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:07:10.435589 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:07:10.435589 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:07:10.439855 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:07:10.439855 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:07:10.439855 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:07:10.439855 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:07:10.439855 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:07:10.439783 unknown[1011]: wrote ssh authorized keys file for user: core Dec 16 12:07:10.449534 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:07:10.449534 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:07:10.470564 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:07:10.576802 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:07:10.576802 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:07:10.581285 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:07:10.595198 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:07:10.595198 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:07:10.595198 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:07:10.595198 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:07:10.595198 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:07:10.595198 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 12:07:11.051714 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:07:11.916900 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:07:11.916900 ignition[1011]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:07:11.921431 ignition[1011]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 12:07:11.924951 ignition[1011]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:07:11.940724 ignition[1011]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:07:11.946219 ignition[1011]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:07:11.949197 ignition[1011]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:07:11.949197 ignition[1011]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:07:11.949197 ignition[1011]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:07:11.949197 ignition[1011]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:07:11.949197 ignition[1011]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:07:11.949197 ignition[1011]: INFO : files: files passed Dec 16 12:07:11.949197 ignition[1011]: INFO : Ignition finished successfully Dec 16 12:07:11.968446 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 16 12:07:11.968475 kernel: audit: type=1130 audit(1765886831.955:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:11.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:11.953071 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:07:11.956818 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:07:11.973777 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:07:11.978232 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:07:11.980084 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:07:11.989945 kernel: audit: type=1130 audit(1765886831.981:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:11.989972 kernel: audit: type=1131 audit(1765886831.981:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:11.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:11.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:11.991239 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:07:11.995666 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:07:11.997553 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:07:11.997553 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:07:12.001082 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:07:12.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.003262 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:07:12.012907 kernel: audit: type=1130 audit(1765886832.003:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.013179 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:07:12.085542 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:07:12.093237 kernel: audit: type=1130 audit(1765886832.087:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.093263 kernel: audit: type=1131 audit(1765886832.087:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:12.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.085653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:07:12.087283 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:07:12.094227 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:07:12.096525 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:07:12.097910 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:07:12.132132 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:07:12.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.134889 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:07:12.140196 kernel: audit: type=1130 audit(1765886832.133:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.154800 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:07:12.154944 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:07:12.157544 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:07:12.159741 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:07:12.161756 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:07:12.161883 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:07:12.175067 kernel: audit: type=1131 audit(1765886832.163:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.164420 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:07:12.176178 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:07:12.177813 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:07:12.179605 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:07:12.181773 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:07:12.183850 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:07:12.185860 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:07:12.187990 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:07:12.190050 systemd[1]: Stopped target sysinit.target - System Initialization. 
Dec 16 12:07:12.193784 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:07:12.196786 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:07:12.198346 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:07:12.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.198488 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:07:12.205465 kernel: audit: type=1131 audit(1765886832.199:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.203314 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:07:12.204489 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:07:12.206711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:07:12.210105 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:07:12.211428 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:07:12.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.211560 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:07:12.218535 kernel: audit: type=1131 audit(1765886832.213:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.217498 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:07:12.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.217647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:07:12.219776 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:07:12.221523 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:07:12.225089 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:07:12.227888 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:07:12.231113 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:07:12.232885 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:07:12.232980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:07:12.234674 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:07:12.234756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:07:12.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.236371 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Dec 16 12:07:12.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.236444 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:07:12.238238 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:07:12.238358 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:07:12.240224 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:07:12.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.240329 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:07:12.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.243072 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:07:12.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.245710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:07:12.246944 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:07:12.247093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:07:12.249522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:07:12.249644 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:07:12.251456 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:07:12.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.251566 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:07:12.257287 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:07:12.259044 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:07:12.266945 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:07:12.270459 ignition[1069]: INFO : Ignition 2.24.0 Dec 16 12:07:12.270459 ignition[1069]: INFO : Stage: umount Dec 16 12:07:12.272301 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:07:12.272301 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:07:12.272301 ignition[1069]: INFO : umount: umount passed Dec 16 12:07:12.272301 ignition[1069]: INFO : Ignition finished successfully Dec 16 12:07:12.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:12.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.273324 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:07:12.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.273444 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:07:12.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.276180 systemd[1]: Stopped target network.target - Network. Dec 16 12:07:12.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.277163 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:07:12.277231 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:07:12.279053 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:07:12.279113 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:07:12.281235 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:07:12.281325 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:07:12.283682 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:07:12.283743 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:07:12.286087 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:07:12.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.288073 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:07:12.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.303000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:07:12.295980 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:07:12.296150 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:07:12.299567 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:07:12.299702 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:07:12.308000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:07:12.306562 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:07:12.308359 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:07:12.308399 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:07:12.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.311342 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Dec 16 12:07:12.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.312503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:07:12.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.312571 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:07:12.314883 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:07:12.314930 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:07:12.317028 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:07:12.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.317077 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:07:12.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.319071 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:07:12.321945 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:07:12.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.325625 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:07:12.327359 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:07:12.327430 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:07:12.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.330509 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:07:12.330653 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:07:12.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.334960 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:07:12.335049 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:07:12.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.336389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:07:12.336428 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:07:12.338496 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 16 12:07:12.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.338560 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:07:12.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.341567 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:07:12.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.341632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:07:12.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.344761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:07:12.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.344821 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:07:12.349050 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:07:12.350192 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:07:12.350255 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:07:12.352445 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:07:12.352489 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:07:12.354732 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:07:12.354777 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:07:12.357102 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:07:12.357172 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:07:12.359230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:07:12.359286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:07:12.366589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:07:12.377165 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:07:12.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.379748 systemd[1]: network-cleanup.service: Deactivated successfully. 
Dec 16 12:07:12.379851 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:07:12.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:12.382439 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:07:12.384821 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:07:12.408223 systemd[1]: Switching root. Dec 16 12:07:12.450209 systemd-journald[346]: Journal stopped Dec 16 12:07:13.308697 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Dec 16 12:07:13.308756 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:07:13.308775 kernel: SELinux: policy capability open_perms=1 Dec 16 12:07:13.308786 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:07:13.308797 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:07:13.308807 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:07:13.308836 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:07:13.308849 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:07:13.308864 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:07:13.308876 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:07:13.308888 systemd[1]: Successfully loaded SELinux policy in 63.113ms. Dec 16 12:07:13.308907 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.115ms. Dec 16 12:07:13.308920 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:07:13.308932 systemd[1]: Detected virtualization kvm. Dec 16 12:07:13.308942 systemd[1]: Detected architecture arm64. Dec 16 12:07:13.308953 systemd[1]: Detected first boot. Dec 16 12:07:13.308965 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:07:13.308978 zram_generator::config[1114]: No configuration found. Dec 16 12:07:13.308998 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:07:13.309009 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:07:13.309038 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:07:13.309055 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:07:13.309067 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:07:13.309079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:07:13.309091 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:07:13.309101 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:07:13.309112 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:07:13.309124 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:07:13.309146 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:07:13.309160 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:07:13.309171 systemd[1]: Created slice user.slice - User and Session Slice. 
Dec 16 12:07:13.309183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:07:13.309194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:07:13.309205 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:07:13.309216 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:07:13.309228 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:07:13.309240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:07:13.309252 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:07:13.309263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:07:13.309274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:07:13.309285 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:07:13.309297 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:07:13.309308 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:07:13.309319 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:07:13.309330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:07:13.309342 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:07:13.309353 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:07:13.309364 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:07:13.309375 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:07:13.309388 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:07:13.309400 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:07:13.309411 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:07:13.309421 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:07:13.309432 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:07:13.309443 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:07:13.309454 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:07:13.309466 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:07:13.309477 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:07:13.309487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:07:13.309498 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:07:13.309508 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:07:13.309519 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:07:13.309533 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:07:13.309546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:07:13.309557 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 16 12:07:13.309568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:07:13.309579 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:07:13.309595 systemd[1]: Reached target machines.target - Containers. Dec 16 12:07:13.309608 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:07:13.309620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:07:13.309631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:07:13.309642 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:07:13.309653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:07:13.309663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:07:13.309675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:07:13.309686 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:07:13.309698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:07:13.309709 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:07:13.309720 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:07:13.309731 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:07:13.309744 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:07:13.309755 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:07:13.309766 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:07:13.309779 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:07:13.309790 kernel: fuse: init (API version 7.41) Dec 16 12:07:13.309800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:07:13.309811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:07:13.309824 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:07:13.309835 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:07:13.309846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:07:13.309856 kernel: ACPI: bus type drm_connector registered Dec 16 12:07:13.309867 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:07:13.309878 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:07:13.309889 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:07:13.309902 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:07:13.309912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:07:13.309927 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:07:13.309938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 16 12:07:13.309949 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:07:13.309959 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:07:13.310000 systemd-journald[1176]: Collecting audit messages is enabled. Dec 16 12:07:13.310046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:07:13.310059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:07:13.310072 systemd-journald[1176]: Journal started Dec 16 12:07:13.310096 systemd-journald[1176]: Runtime Journal (/run/log/journal/c653a8e0267d465d971c9bbce86d4c4f) is 6M, max 48.5M, 42.4M free. Dec 16 12:07:13.130000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:07:13.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.252000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:07:13.252000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:07:13.253000 audit: BPF prog-id=15 op=LOAD Dec 16 12:07:13.253000 audit: BPF prog-id=16 op=LOAD Dec 16 12:07:13.253000 audit: BPF prog-id=17 op=LOAD Dec 16 12:07:13.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.307000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:07:13.307000 audit[1176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc2676ee0 a2=4000 a3=0 items=0 ppid=1 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:13.307000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:07:13.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.021493 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:07:13.044269 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:07:13.044729 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:07:13.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:13.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.314189 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:07:13.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.315281 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:07:13.315478 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:07:13.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.317893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:07:13.318962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:07:13.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.320748 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:07:13.320919 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:07:13.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.322466 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:07:13.322620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:07:13.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.324081 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:07:13.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:13.325605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:07:13.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.327363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:07:13.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.329680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:07:13.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.331619 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:07:13.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.337909 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:07:13.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.346961 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:07:13.348670 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:07:13.351069 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:07:13.353180 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:07:13.354435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:07:13.354475 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:07:13.356626 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:07:13.358535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:07:13.358661 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:07:13.363850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:07:13.366099 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:07:13.367378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:07:13.368378 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:07:13.369779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 16 12:07:13.373165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:07:13.374344 systemd-journald[1176]: Time spent on flushing to /var/log/journal/c653a8e0267d465d971c9bbce86d4c4f is 15.641ms for 1005 entries. Dec 16 12:07:13.374344 systemd-journald[1176]: System Journal (/var/log/journal/c653a8e0267d465d971c9bbce86d4c4f) is 8M, max 163.5M, 155.5M free. Dec 16 12:07:13.394654 systemd-journald[1176]: Received client request to flush runtime journal. Dec 16 12:07:13.394704 kernel: loop1: detected capacity change from 0 to 211168 Dec 16 12:07:13.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.375212 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:07:13.378889 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:07:13.382133 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:07:13.383566 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:07:13.386391 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:07:13.389824 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:07:13.394735 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:07:13.398647 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:07:13.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.404878 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Dec 16 12:07:13.404893 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Dec 16 12:07:13.410709 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:07:13.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.414127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:07:13.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.417043 kernel: loop2: detected capacity change from 0 to 45344 Dec 16 12:07:13.418536 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:07:13.449061 kernel: loop3: detected capacity change from 0 to 100192 Dec 16 12:07:13.453348 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:07:13.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:13.457000 audit: BPF prog-id=18 op=LOAD Dec 16 12:07:13.457000 audit: BPF prog-id=19 op=LOAD Dec 16 12:07:13.457000 audit: BPF prog-id=20 op=LOAD Dec 16 12:07:13.458813 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:07:13.460000 audit: BPF prog-id=21 op=LOAD Dec 16 12:07:13.461483 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:07:13.464264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:07:13.468103 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:07:13.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.471000 audit: BPF prog-id=22 op=LOAD Dec 16 12:07:13.471000 audit: BPF prog-id=23 op=LOAD Dec 16 12:07:13.471000 audit: BPF prog-id=24 op=LOAD Dec 16 12:07:13.473160 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:07:13.479000 audit: BPF prog-id=25 op=LOAD Dec 16 12:07:13.479000 audit: BPF prog-id=26 op=LOAD Dec 16 12:07:13.479000 audit: BPF prog-id=27 op=LOAD Dec 16 12:07:13.480312 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:07:13.486045 kernel: loop4: detected capacity change from 0 to 211168 Dec 16 12:07:13.492577 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Dec 16 12:07:13.492609 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Dec 16 12:07:13.497576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:07:13.498998 kernel: loop5: detected capacity change from 0 to 45344 Dec 16 12:07:13.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.505038 kernel: loop6: detected capacity change from 0 to 100192 Dec 16 12:07:13.509053 (sd-merge)[1259]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 12:07:13.510450 systemd-nsresourced[1257]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:07:13.511489 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:07:13.512353 (sd-merge)[1259]: Merged extensions into '/usr'. Dec 16 12:07:13.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.513609 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:07:13.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.524206 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:07:13.524226 systemd[1]: Reloading... Dec 16 12:07:13.575559 systemd-oomd[1252]: No swap; memory pressure usage will be degraded Dec 16 12:07:13.582077 zram_generator::config[1302]: No configuration found. 
Dec 16 12:07:13.600724 systemd-resolved[1254]: Positive Trust Anchors: Dec 16 12:07:13.600741 systemd-resolved[1254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:07:13.600745 systemd-resolved[1254]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:07:13.600774 systemd-resolved[1254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:07:13.607516 systemd-resolved[1254]: Defaulting to hostname 'linux'. Dec 16 12:07:13.723419 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:07:13.723867 systemd[1]: Reloading finished in 199 ms. Dec 16 12:07:13.750072 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:07:13.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.751624 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:07:13.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.753196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:07:13.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:13.756956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:07:13.766237 systemd[1]: Starting ensure-sysext.service... Dec 16 12:07:13.768038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
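The systemd-resolved lines above record the built-in DNSSEC positive trust anchors (the two root-zone DS records) and the default negative trust anchors for private and special-use domains, before the service defaults the hostname to 'linux'. A short sketch for inspecting that state from a shell, assuming the stock resolvectl client, could be:

    # Per-link DNS servers, DNSSEC setting and search domains as resolved sees them
    resolvectl status

    # Resolve a name through systemd-resolved; the output notes whether the answer was authenticated
    resolvectl query example.com

    # Cache and DNSSEC verdict counters
    resolvectl statistics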
Dec 16 12:07:13.769000 audit: BPF prog-id=28 op=LOAD Dec 16 12:07:13.769000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:07:13.769000 audit: BPF prog-id=29 op=LOAD Dec 16 12:07:13.769000 audit: BPF prog-id=30 op=LOAD Dec 16 12:07:13.769000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:07:13.769000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:07:13.770000 audit: BPF prog-id=31 op=LOAD Dec 16 12:07:13.770000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:07:13.770000 audit: BPF prog-id=32 op=LOAD Dec 16 12:07:13.770000 audit: BPF prog-id=33 op=LOAD Dec 16 12:07:13.770000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:07:13.770000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:07:13.771000 audit: BPF prog-id=34 op=LOAD Dec 16 12:07:13.771000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:07:13.771000 audit: BPF prog-id=35 op=LOAD Dec 16 12:07:13.771000 audit: BPF prog-id=36 op=LOAD Dec 16 12:07:13.771000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:07:13.771000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:07:13.772000 audit: BPF prog-id=37 op=LOAD Dec 16 12:07:13.772000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:07:13.772000 audit: BPF prog-id=38 op=LOAD Dec 16 12:07:13.772000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:07:13.772000 audit: BPF prog-id=39 op=LOAD Dec 16 12:07:13.773000 audit: BPF prog-id=40 op=LOAD Dec 16 12:07:13.773000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:07:13.773000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:07:13.780169 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:07:13.780185 systemd[1]: Reloading... Dec 16 12:07:13.784899 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:07:13.785277 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:07:13.785915 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:07:13.786997 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Dec 16 12:07:13.787171 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Dec 16 12:07:13.791247 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:07:13.791376 systemd-tmpfiles[1337]: Skipping /boot Dec 16 12:07:13.801683 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:07:13.801797 systemd-tmpfiles[1337]: Skipping /boot Dec 16 12:07:13.831052 zram_generator::config[1372]: No configuration found. Dec 16 12:07:13.968109 systemd[1]: Reloading finished in 187 ms. Dec 16 12:07:13.991743 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:07:13.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:13.994000 audit: BPF prog-id=41 op=LOAD Dec 16 12:07:13.994000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:07:13.994000 audit: BPF prog-id=42 op=LOAD Dec 16 12:07:13.994000 audit: BPF prog-id=43 op=LOAD Dec 16 12:07:13.994000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:07:13.994000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:07:13.995000 audit: BPF prog-id=44 op=LOAD Dec 16 12:07:13.995000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:07:13.995000 audit: BPF prog-id=45 op=LOAD Dec 16 12:07:13.995000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:07:13.995000 audit: BPF prog-id=46 op=LOAD Dec 16 12:07:13.995000 audit: BPF prog-id=47 op=LOAD Dec 16 12:07:13.995000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:07:13.995000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:07:13.997000 audit: BPF prog-id=48 op=LOAD Dec 16 12:07:13.997000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:07:13.997000 audit: BPF prog-id=49 op=LOAD Dec 16 12:07:13.997000 audit: BPF prog-id=50 op=LOAD Dec 16 12:07:13.997000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:07:13.997000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:07:13.997000 audit: BPF prog-id=51 op=LOAD Dec 16 12:07:13.997000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:07:13.998000 audit: BPF prog-id=52 op=LOAD Dec 16 12:07:13.998000 audit: BPF prog-id=53 op=LOAD Dec 16 12:07:13.998000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:07:13.998000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:07:14.018127 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:07:14.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.025942 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:07:14.028431 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:07:14.040742 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:07:14.043130 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:07:14.045000 audit: BPF prog-id=54 op=LOAD Dec 16 12:07:14.045000 audit: BPF prog-id=55 op=LOAD Dec 16 12:07:14.045000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:07:14.045000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:07:14.048085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:07:14.051382 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:07:14.058611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:07:14.060414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:07:14.064480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:07:14.074609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:07:14.077173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:07:14.077370 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
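The long runs of "audit: BPF prog-id=… op=LOAD/UNLOAD" around the two reloads are systemd detaching and re-attaching the BPF programs it uses for socket, device and firewall filtering: on each reload the old program IDs are unloaded and fresh ones appear. A small sketch for looking at what is actually loaded at such a moment, assuming bpftool from the kernel tools is installed on the host, might be:

    # List all BPF programs currently loaded, with their IDs and types
    bpftool prog show

    # Show which programs are attached where in the cgroup hierarchy
    # (systemd attaches most of its per-unit filters to unit cgroups)
    bpftool cgroup tree /sys/fs/cgroup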
Dec 16 12:07:14.077458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:07:14.078666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:07:14.078885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:07:14.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.083000 audit[1416]: SYSTEM_BOOT pid=1416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.086144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:07:14.088062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:07:14.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.091444 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:07:14.091641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:07:14.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.095172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:07:14.096942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:07:14.100243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:07:14.100423 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:07:14.100515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:07:14.100674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
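Several units above are "skipped because of an unmet condition check" rather than failed: ConditionPathExists= and ConditionDirectoryNotEmpty= are evaluated before the unit runs, and modprobe@dm_mod.service, modprobe@efi_pstore.service and modprobe@loop.service are instances of a single template unit. A sketch for inspecting this behaviour, plus a hypothetical drop-in showing a condition of the same kind on one's own unit, could be:

    # Show the template behind modprobe@dm_mod.service and friends
    systemctl cat modprobe@.service

    # Condition results are recorded on the unit, e.g. why systemd-pstore.service was skipped
    systemctl show -p ConditionResult -p ConditionTimestamp systemd-pstore.service

    # Hypothetical drop-in adding the same style of condition to an example.service
    mkdir -p /etc/systemd/system/example.service.d
    cat <<'EOF' > /etc/systemd/system/example.service.d/condition.conf
    [Unit]
    # Only run when persistent platform storage actually holds crash data
    ConditionDirectoryNotEmpty=/sys/fs/pstore
    EOF
    systemctl daemon-reload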
Dec 16 12:07:14.101479 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:07:14.102945 systemd-udevd[1410]: Using default interface naming scheme 'v257'. Dec 16 12:07:14.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.105047 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:07:14.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.113684 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:07:14.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:14.118000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:07:14.118000 audit[1438]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd2e60010 a2=420 a3=0 items=0 ppid=1405 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:14.118000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:07:14.118979 augenrules[1438]: No rules Dec 16 12:07:14.121468 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:07:14.123639 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:07:14.125325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:07:14.128268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:07:14.128486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:07:14.134614 systemd[1]: Finished ensure-sysext.service. Dec 16 12:07:14.140380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:07:14.141550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:07:14.144329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:07:14.146833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:07:14.150231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:07:14.150343 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:07:14.150389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:07:14.159335 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
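audit-rules.service above runs augenrules, which assembles everything under /etc/audit/rules.d/ into /etc/audit/audit.rules and loads it with auditctl (the hex proctitle in the audit record decodes to "/sbin/auditctl -R /etc/audit/audit.rules"); here it finds nothing to load, hence "No rules". A small sketch of adding a watch rule and reloading, with the rule itself purely illustrative, could be:

    # Hypothetical watch rule: log writes/attribute changes to /etc/passwd under key "identity"
    cat <<'EOF' > /etc/audit/rules.d/90-identity.rules
    -w /etc/passwd -p wa -k identity
    EOF

    # Rebuild /etc/audit/audit.rules from rules.d and load it into the kernel
    augenrules --load

    # Or load a ready-made rules file directly, as audit-rules.service does
    auditctl -R /etc/audit/audit.rules

    # Confirm what the kernel currently enforces
    auditctl -l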
Dec 16 12:07:14.162183 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:07:14.163498 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:07:14.163970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:07:14.164188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:07:14.187892 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:07:14.188170 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:07:14.190457 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:07:14.191167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:07:14.195816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:07:14.195873 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:07:14.226875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:07:14.230984 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:07:14.261811 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:07:14.264485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:07:14.265859 systemd-networkd[1471]: lo: Link UP Dec 16 12:07:14.266044 systemd-networkd[1471]: lo: Gained carrier Dec 16 12:07:14.271969 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:07:14.274363 systemd-networkd[1471]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:07:14.274373 systemd-networkd[1471]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:07:14.275353 systemd-networkd[1471]: eth0: Link UP Dec 16 12:07:14.275496 systemd[1]: Reached target network.target - Network. Dec 16 12:07:14.275785 systemd-networkd[1471]: eth0: Gained carrier Dec 16 12:07:14.275801 systemd-networkd[1471]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:07:14.277918 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:07:14.281007 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:07:14.292082 systemd-networkd[1471]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:07:14.300065 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:07:14.301853 systemd-timesyncd[1472]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:07:14.301903 systemd-timesyncd[1472]: Initial clock synchronization to Tue 2025-12-16 12:07:14.346542 UTC. Dec 16 12:07:14.302294 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:07:14.304608 systemd[1]: Reached target time-set.target - System Time Set. 
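The networkd lines show eth0 matching the catch-all /usr/lib/systemd/network/zz-default.network and acquiring 10.0.0.13/16 over DHCPv4, after which timesyncd synchronizes against 10.0.0.1. A minimal .network file of the same shape, written as a sketch (the file name and interface match are illustrative, not the shipped zz-default.network), might be:

    # /etc/systemd/network/20-wired.network  (hypothetical file name)
    cat <<'EOF' > /etc/systemd/network/20-wired.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF

    # Apply the new configuration and inspect the result
    networkctl reload
    networkctl status eth0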
Dec 16 12:07:14.359076 ldconfig[1407]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:07:14.368295 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:07:14.375775 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:07:14.391089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:07:14.399080 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:07:14.424196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:07:14.426982 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:07:14.428309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:07:14.429658 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:07:14.431196 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:07:14.432451 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:07:14.433996 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:07:14.435484 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:07:14.436580 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:07:14.437976 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:07:14.438024 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:07:14.439029 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:07:14.440765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:07:14.443502 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:07:14.446647 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:07:14.448311 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:07:14.449805 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:07:14.456059 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:07:14.457667 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:07:14.459763 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:07:14.461117 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:07:14.462192 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:07:14.463297 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:07:14.463329 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:07:14.464522 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:07:14.466778 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:07:14.468918 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:07:14.471332 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
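By the time sockets.target is reached above, systemd is already listening on docker.socket, sshd.socket and the systemd-ssh-generator AF_UNIX/AF_VSOCK sockets on behalf of services that have not started yet; the daemons are only launched on the first connection. A quick sketch for checking which sockets are held open and which unit each one activates, using standard tooling, could be:

    # Socket units, the addresses they listen on, and the service each activates
    systemctl list-sockets

    # Cross-check against the kernel's view of listening TCP sockets and owning processes
    ss -tlnp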
Dec 16 12:07:14.473467 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:07:14.474681 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:07:14.475756 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:07:14.479100 jq[1521]: false Dec 16 12:07:14.480152 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:07:14.482277 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:07:14.487106 extend-filesystems[1522]: Found /dev/vda6 Dec 16 12:07:14.485371 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:07:14.489184 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:07:14.490364 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:07:14.490457 extend-filesystems[1522]: Found /dev/vda9 Dec 16 12:07:14.490826 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:07:14.492752 extend-filesystems[1522]: Checking size of /dev/vda9 Dec 16 12:07:14.499416 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:07:14.500666 extend-filesystems[1522]: Resized partition /dev/vda9 Dec 16 12:07:14.504324 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:07:14.503397 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:07:14.508869 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:07:14.513067 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 12:07:14.512475 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:07:14.513076 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:07:14.513376 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:07:14.513578 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:07:14.525032 jq[1544]: true Dec 16 12:07:14.525401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:07:14.525666 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:07:14.540041 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 12:07:14.547553 jq[1560]: true Dec 16 12:07:14.554837 update_engine[1534]: I20251216 12:07:14.545729 1534 main.cc:92] Flatcar Update Engine starting Dec 16 12:07:14.557034 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:07:14.557034 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:07:14.557034 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 12:07:14.567539 extend-filesystems[1522]: Resized filesystem in /dev/vda9 Dec 16 12:07:14.558414 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:07:14.570007 tar[1551]: linux-arm64/LICENSE Dec 16 12:07:14.570007 tar[1551]: linux-arm64/helm Dec 16 12:07:14.559299 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
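extend-filesystems grows the root filesystem in place: the kernel reports /dev/vda9 being resized online from 456704 to 1784827 4 KiB blocks, i.e. from roughly 1.75 GiB to about 6.8 GiB (1784827 × 4096 bytes ≈ 7.3 GB). A sketch of doing the equivalent by hand on an ext4 root, assuming the underlying partition has already been enlarged, might be:

    # Check the current block size and block count of the filesystem
    dumpe2fs -h /dev/vda9 | grep -E 'Block count|Block size'

    # Grow the mounted ext4 filesystem to fill the partition; ext4 supports this
    # online, which is why no unmount or reboot is needed in the log above
    resize2fs /dev/vda9

    # Verify the new size
    df -h /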
Dec 16 12:07:14.576725 dbus-daemon[1519]: [system] SELinux support is enabled Dec 16 12:07:14.576975 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:07:14.580058 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:07:14.580089 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:07:14.582187 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:07:14.582213 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:07:14.582314 update_engine[1534]: I20251216 12:07:14.582252 1534 update_check_scheduler.cc:74] Next update check in 7m15s Dec 16 12:07:14.585383 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:07:14.588305 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:07:14.620795 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:07:14.621564 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:07:14.623608 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:07:14.629995 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:07:14.630455 systemd-logind[1531]: New seat seat0. Dec 16 12:07:14.634296 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:07:14.644615 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:07:14.698192 containerd[1557]: time="2025-12-16T12:07:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:07:14.700519 containerd[1557]: time="2025-12-16T12:07:14.700479000Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712029840Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.76µs" Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712071320Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712112840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712124360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712263920Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712279880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712327760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip 
plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712337880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712828080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712853160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712870400Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713031 containerd[1557]: time="2025-12-16T12:07:14.712881240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713330 containerd[1557]: time="2025-12-16T12:07:14.713288040Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.713356 containerd[1557]: time="2025-12-16T12:07:14.713330720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:07:14.714089 containerd[1557]: time="2025-12-16T12:07:14.714049520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.714392 containerd[1557]: time="2025-12-16T12:07:14.714363440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.714424 containerd[1557]: time="2025-12-16T12:07:14.714412680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:07:14.714460 containerd[1557]: time="2025-12-16T12:07:14.714428600Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:07:14.714479 containerd[1557]: time="2025-12-16T12:07:14.714465600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:07:14.716027 containerd[1557]: time="2025-12-16T12:07:14.714743400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:07:14.716027 containerd[1557]: time="2025-12-16T12:07:14.714845400Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:07:14.718162 containerd[1557]: time="2025-12-16T12:07:14.718125440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:07:14.718215 containerd[1557]: time="2025-12-16T12:07:14.718187200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:07:14.718338 containerd[1557]: time="2025-12-16T12:07:14.718317240Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip 
plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:07:14.718338 containerd[1557]: time="2025-12-16T12:07:14.718334680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:07:14.718381 containerd[1557]: time="2025-12-16T12:07:14.718349440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:07:14.718381 containerd[1557]: time="2025-12-16T12:07:14.718361560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:07:14.718418 containerd[1557]: time="2025-12-16T12:07:14.718380160Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:07:14.718418 containerd[1557]: time="2025-12-16T12:07:14.718391080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:07:14.718418 containerd[1557]: time="2025-12-16T12:07:14.718402600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:07:14.718418 containerd[1557]: time="2025-12-16T12:07:14.718414360Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:07:14.718483 containerd[1557]: time="2025-12-16T12:07:14.718425680Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:07:14.718483 containerd[1557]: time="2025-12-16T12:07:14.718436720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:07:14.718483 containerd[1557]: time="2025-12-16T12:07:14.718447400Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:07:14.718483 containerd[1557]: time="2025-12-16T12:07:14.718459720Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:07:14.718599 containerd[1557]: time="2025-12-16T12:07:14.718569920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:07:14.718624 containerd[1557]: time="2025-12-16T12:07:14.718608760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:07:14.718642 containerd[1557]: time="2025-12-16T12:07:14.718626080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:07:14.718642 containerd[1557]: time="2025-12-16T12:07:14.718637360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:07:14.718684 containerd[1557]: time="2025-12-16T12:07:14.718647640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:07:14.718684 containerd[1557]: time="2025-12-16T12:07:14.718663520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:07:14.718684 containerd[1557]: time="2025-12-16T12:07:14.718679200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:07:14.718731 containerd[1557]: time="2025-12-16T12:07:14.718689640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:07:14.718731 containerd[1557]: time="2025-12-16T12:07:14.718699800Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:07:14.718731 containerd[1557]: time="2025-12-16T12:07:14.718709840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:07:14.718731 containerd[1557]: time="2025-12-16T12:07:14.718719080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:07:14.718801 containerd[1557]: time="2025-12-16T12:07:14.718744000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:07:14.718801 containerd[1557]: time="2025-12-16T12:07:14.718782320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:07:14.718801 containerd[1557]: time="2025-12-16T12:07:14.718795920Z" level=info msg="Start snapshots syncer" Dec 16 12:07:14.718856 containerd[1557]: time="2025-12-16T12:07:14.718835720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:07:14.719302 containerd[1557]: time="2025-12-16T12:07:14.719257920Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:07:14.719404 containerd[1557]: time="2025-12-16T12:07:14.719316680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:07:14.719404 containerd[1557]: time="2025-12-16T12:07:14.719371840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:07:14.719544 containerd[1557]: time="2025-12-16T12:07:14.719520680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 
16 12:07:14.719577 containerd[1557]: time="2025-12-16T12:07:14.719549480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:07:14.719577 containerd[1557]: time="2025-12-16T12:07:14.719572960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:07:14.719624 containerd[1557]: time="2025-12-16T12:07:14.719595040Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:07:14.719624 containerd[1557]: time="2025-12-16T12:07:14.719610400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:07:14.719624 containerd[1557]: time="2025-12-16T12:07:14.719621160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:07:14.719681 containerd[1557]: time="2025-12-16T12:07:14.719631040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:07:14.719681 containerd[1557]: time="2025-12-16T12:07:14.719640960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:07:14.719681 containerd[1557]: time="2025-12-16T12:07:14.719652640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:07:14.719728 containerd[1557]: time="2025-12-16T12:07:14.719700840Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:07:14.719728 containerd[1557]: time="2025-12-16T12:07:14.719714640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:07:14.719728 containerd[1557]: time="2025-12-16T12:07:14.719722640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:07:14.719778 containerd[1557]: time="2025-12-16T12:07:14.719732360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:07:14.719878 containerd[1557]: time="2025-12-16T12:07:14.719740640Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:07:14.719906 containerd[1557]: time="2025-12-16T12:07:14.719880840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:07:14.719906 containerd[1557]: time="2025-12-16T12:07:14.719895800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:07:14.719986 containerd[1557]: time="2025-12-16T12:07:14.719972640Z" level=info msg="runtime interface created" Dec 16 12:07:14.719986 containerd[1557]: time="2025-12-16T12:07:14.719981200Z" level=info msg="created NRI interface" Dec 16 12:07:14.720046 containerd[1557]: time="2025-12-16T12:07:14.719989800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:07:14.720046 containerd[1557]: time="2025-12-16T12:07:14.720001680Z" level=info msg="Connect containerd service" Dec 16 12:07:14.720046 containerd[1557]: time="2025-12-16T12:07:14.720038840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:07:14.720931 containerd[1557]: 
time="2025-12-16T12:07:14.720892160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:07:14.791821 containerd[1557]: time="2025-12-16T12:07:14.791697360Z" level=info msg="Start subscribing containerd event" Dec 16 12:07:14.791821 containerd[1557]: time="2025-12-16T12:07:14.791788000Z" level=info msg="Start recovering state" Dec 16 12:07:14.791927 containerd[1557]: time="2025-12-16T12:07:14.791878400Z" level=info msg="Start event monitor" Dec 16 12:07:14.791927 containerd[1557]: time="2025-12-16T12:07:14.791892000Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:07:14.791927 containerd[1557]: time="2025-12-16T12:07:14.791899560Z" level=info msg="Start streaming server" Dec 16 12:07:14.791927 containerd[1557]: time="2025-12-16T12:07:14.791907760Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:07:14.791927 containerd[1557]: time="2025-12-16T12:07:14.791914840Z" level=info msg="runtime interface starting up..." Dec 16 12:07:14.791927 containerd[1557]: time="2025-12-16T12:07:14.791920720Z" level=info msg="starting plugins..." Dec 16 12:07:14.792050 containerd[1557]: time="2025-12-16T12:07:14.791935560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:07:14.792050 containerd[1557]: time="2025-12-16T12:07:14.792005320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:07:14.792270 containerd[1557]: time="2025-12-16T12:07:14.792118840Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:07:14.792390 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:07:14.794304 containerd[1557]: time="2025-12-16T12:07:14.794156560Z" level=info msg="containerd successfully booted in 0.096338s" Dec 16 12:07:14.815656 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:07:14.835819 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:07:14.838766 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:07:14.862525 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:07:14.864073 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:07:14.867010 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:07:14.882332 tar[1551]: linux-arm64/README.md Dec 16 12:07:14.888062 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:07:14.891452 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:07:14.893813 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:07:14.895373 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:07:14.897142 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:07:16.147253 systemd-networkd[1471]: eth0: Gained IPv6LL Dec 16 12:07:16.149644 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:07:16.151502 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:07:16.155455 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:07:16.158348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:07:16.167545 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:07:16.188160 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:07:16.190102 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:07:16.190404 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:07:16.192712 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:07:16.762834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:16.764583 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:07:16.766799 systemd[1]: Startup finished in 1.481s (kernel) + 5.606s (initrd) + 4.186s (userspace) = 11.274s. Dec 16 12:07:16.767041 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:07:17.112054 kubelet[1657]: E1216 12:07:17.111694 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:07:17.114088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:07:17.114202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:07:17.114497 systemd[1]: kubelet.service: Consumed 761ms CPU time, 259.1M memory peak. Dec 16 12:07:18.759779 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:07:18.760918 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:58666.service - OpenSSH per-connection server daemon (10.0.0.1:58666). Dec 16 12:07:18.843918 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 58666 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:18.846106 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:18.852439 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:07:18.853342 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:07:18.857752 systemd-logind[1531]: New session 1 of user core. Dec 16 12:07:18.887263 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:07:18.889826 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:07:18.915349 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:18.917992 systemd-logind[1531]: New session 2 of user core. Dec 16 12:07:19.023204 systemd[1676]: Queued start job for default target default.target. Dec 16 12:07:19.041153 systemd[1676]: Created slice app.slice - User Application Slice. Dec 16 12:07:19.041206 systemd[1676]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:07:19.041219 systemd[1676]: Reached target paths.target - Paths. Dec 16 12:07:19.041275 systemd[1676]: Reached target timers.target - Timers. Dec 16 12:07:19.042561 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:07:19.043384 systemd[1676]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... 
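The kubelet exit above is likewise expected at this stage: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits with status 1. That file is normally written by kubeadm during "kubeadm init" or "kubeadm join", after which the service is restarted and stays up. Purely as a sketch, a stripped-down KubeletConfiguration of the kind kubeadm generates (all field values here are illustrative) looks like:

    # /var/lib/kubelet/config.yaml  (normally generated by kubeadm, not written by hand)
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Match the container runtime's cgroup driver; the containerd config above uses SystemdCgroup=true
    cgroupDriver: systemd
    # Illustrative cluster DNS settings
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    EOF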
Dec 16 12:07:19.053932 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:07:19.054254 systemd[1676]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:07:19.054388 systemd[1676]: Reached target sockets.target - Sockets. Dec 16 12:07:19.054431 systemd[1676]: Reached target basic.target - Basic System. Dec 16 12:07:19.054460 systemd[1676]: Reached target default.target - Main User Target. Dec 16 12:07:19.054486 systemd[1676]: Startup finished in 131ms. Dec 16 12:07:19.054917 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:07:19.056413 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:07:19.067935 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670). Dec 16 12:07:19.120372 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:19.121792 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:19.126667 systemd-logind[1531]: New session 3 of user core. Dec 16 12:07:19.137256 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:07:19.150208 sshd[1694]: Connection closed by 10.0.0.1 port 58670 Dec 16 12:07:19.149662 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Dec 16 12:07:19.169234 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:58670.service: Deactivated successfully. Dec 16 12:07:19.173062 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:07:19.173839 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:07:19.176350 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:58680.service - OpenSSH per-connection server daemon (10.0.0.1:58680). Dec 16 12:07:19.176975 systemd-logind[1531]: Removed session 3. Dec 16 12:07:19.236258 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 58680 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:19.237745 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:19.242074 systemd-logind[1531]: New session 4 of user core. Dec 16 12:07:19.253235 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:07:19.261165 sshd[1704]: Connection closed by 10.0.0.1 port 58680 Dec 16 12:07:19.261655 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Dec 16 12:07:19.266558 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:07:19.266747 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:58680.service: Deactivated successfully. Dec 16 12:07:19.269480 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:07:19.271663 systemd-logind[1531]: Removed session 4. Dec 16 12:07:19.272963 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:58688.service - OpenSSH per-connection server daemon (10.0.0.1:58688). Dec 16 12:07:19.332107 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 58688 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:19.333495 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:19.338086 systemd-logind[1531]: New session 5 of user core. Dec 16 12:07:19.345234 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:07:19.356551 sshd[1714]: Connection closed by 10.0.0.1 port 58688 Dec 16 12:07:19.357152 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Dec 16 12:07:19.370819 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:58688.service: Deactivated successfully. Dec 16 12:07:19.372402 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:07:19.373131 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:07:19.375310 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:58694.service - OpenSSH per-connection server daemon (10.0.0.1:58694). Dec 16 12:07:19.375910 systemd-logind[1531]: Removed session 5. Dec 16 12:07:19.439404 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 58694 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:19.440794 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:19.445031 systemd-logind[1531]: New session 6 of user core. Dec 16 12:07:19.460254 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:07:19.479319 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:07:19.479617 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:07:19.497114 sudo[1725]: pam_unix(sudo:session): session closed for user root Dec 16 12:07:19.499070 sshd[1724]: Connection closed by 10.0.0.1 port 58694 Dec 16 12:07:19.499193 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Dec 16 12:07:19.515370 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:58694.service: Deactivated successfully. Dec 16 12:07:19.517149 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:07:19.517937 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:07:19.522343 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:58696.service - OpenSSH per-connection server daemon (10.0.0.1:58696). Dec 16 12:07:19.523281 systemd-logind[1531]: Removed session 6. Dec 16 12:07:19.583361 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 58696 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:19.585547 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:19.589773 systemd-logind[1531]: New session 7 of user core. Dec 16 12:07:19.603253 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:07:19.616763 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:07:19.617083 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:07:19.665107 sudo[1738]: pam_unix(sudo:session): session closed for user root Dec 16 12:07:19.672109 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:07:19.672412 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:07:19.680593 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 12:07:19.716000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:07:19.718086 augenrules[1762]: No rules Dec 16 12:07:19.718628 kernel: kauditd_printk_skb: 174 callbacks suppressed Dec 16 12:07:19.718663 kernel: audit: type=1305 audit(1765886839.716:217): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:07:19.720947 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:07:19.721202 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:07:19.716000 audit[1762]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffead76360 a2=420 a3=0 items=0 ppid=1743 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:19.722887 sudo[1737]: pam_unix(sudo:session): session closed for user root Dec 16 12:07:19.725549 kernel: audit: type=1300 audit(1765886839.716:217): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffead76360 a2=420 a3=0 items=0 ppid=1743 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:19.725585 kernel: audit: type=1327 audit(1765886839.716:217): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:07:19.716000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:07:19.726732 sshd[1736]: Connection closed by 10.0.0.1 port 58696 Dec 16 12:07:19.726650 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Dec 16 12:07:19.728631 kernel: audit: type=1130 audit(1765886839.720:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.733846 kernel: audit: type=1131 audit(1765886839.720:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.733899 kernel: audit: type=1106 audit(1765886839.721:220): pid=1737 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.721000 audit[1737]: USER_END pid=1737 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:19.721000 audit[1737]: CRED_DISP pid=1737 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.740090 kernel: audit: type=1104 audit(1765886839.721:221): pid=1737 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.740141 kernel: audit: type=1106 audit(1765886839.726:222): pid=1732 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.726000 audit[1732]: USER_END pid=1732 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.726000 audit[1732]: CRED_DISP pid=1732 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.747891 kernel: audit: type=1104 audit(1765886839.726:223): pid=1732 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.752113 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:58696.service: Deactivated successfully. Dec 16 12:07:19.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.754453 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:07:19.756036 kernel: audit: type=1131 audit(1765886839.751:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.756643 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:07:19.759224 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:58698.service - OpenSSH per-connection server daemon (10.0.0.1:58698). Dec 16 12:07:19.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:58698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.759937 systemd-logind[1531]: Removed session 7. 
Dec 16 12:07:19.823000 audit[1771]: USER_ACCT pid=1771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.824813 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 58698 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:07:19.824000 audit[1771]: CRED_ACQ pid=1771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.824000 audit[1771]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4155240 a2=3 a3=0 items=0 ppid=1 pid=1771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:19.824000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:07:19.826221 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:07:19.830667 systemd-logind[1531]: New session 8 of user core. Dec 16 12:07:19.837239 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:07:19.838000 audit[1771]: USER_START pid=1771 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.840000 audit[1775]: CRED_ACQ pid=1775 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:19.848000 audit[1776]: USER_ACCT pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.849673 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:07:19.848000 audit[1776]: CRED_REFR pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:19.849953 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:07:19.848000 audit[1776]: USER_START pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:20.119253 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 12:07:20.135390 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:07:20.329360 dockerd[1797]: time="2025-12-16T12:07:20.329302100Z" level=info msg="Starting up" Dec 16 12:07:20.330628 dockerd[1797]: time="2025-12-16T12:07:20.330450983Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:07:20.340125 dockerd[1797]: time="2025-12-16T12:07:20.340085904Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:07:20.538201 dockerd[1797]: time="2025-12-16T12:07:20.538087567Z" level=info msg="Loading containers: start." Dec 16 12:07:20.547062 kernel: Initializing XFRM netlink socket Dec 16 12:07:20.585000 audit[1850]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.585000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe1d693e0 a2=0 a3=0 items=0 ppid=1797 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:07:20.587000 audit[1852]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.587000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff30121c0 a2=0 a3=0 items=0 ppid=1797 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:07:20.589000 audit[1854]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.589000 audit[1854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8dd26d0 a2=0 a3=0 items=0 ppid=1797 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.589000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:07:20.590000 audit[1856]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.590000 audit[1856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec880b40 a2=0 a3=0 items=0 ppid=1797 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:07:20.592000 audit[1858]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.592000 audit[1858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5bc7500 a2=0 a3=0 items=0 ppid=1797 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:07:20.594000 audit[1860]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.594000 audit[1860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd38d90f0 a2=0 a3=0 items=0 ppid=1797 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:07:20.596000 audit[1862]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.596000 audit[1862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd50ac180 a2=0 a3=0 items=0 ppid=1797 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:07:20.598000 audit[1864]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.598000 audit[1864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffeb281fe0 a2=0 a3=0 items=0 ppid=1797 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:07:20.632000 audit[1867]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.632000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffffff84710 a2=0 a3=0 items=0 ppid=1797 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:07:20.634000 audit[1869]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1869 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.634000 audit[1869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeef31ca0 a2=0 a3=0 items=0 ppid=1797 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:07:20.636000 audit[1871]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.636000 audit[1871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd03e3a00 a2=0 a3=0 items=0 ppid=1797 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:07:20.637000 audit[1873]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.637000 audit[1873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff4065130 a2=0 a3=0 items=0 ppid=1797 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:07:20.639000 audit[1875]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.639000 audit[1875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc1f43ad0 a2=0 a3=0 items=0 ppid=1797 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:07:20.671000 audit[1905]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.671000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe8831a70 a2=0 a3=0 items=0 ppid=1797 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:07:20.673000 audit[1907]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.673000 audit[1907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffdb83870 a2=0 a3=0 items=0 ppid=1797 
pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:07:20.675000 audit[1909]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.675000 audit[1909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde049270 a2=0 a3=0 items=0 ppid=1797 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:07:20.676000 audit[1911]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.676000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbae9a40 a2=0 a3=0 items=0 ppid=1797 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:07:20.678000 audit[1913]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.678000 audit[1913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeac657f0 a2=0 a3=0 items=0 ppid=1797 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:07:20.680000 audit[1915]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.680000 audit[1915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcac9f2c0 a2=0 a3=0 items=0 ppid=1797 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:07:20.682000 audit[1917]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.682000 audit[1917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff700bf50 a2=0 a3=0 items=0 ppid=1797 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:07:20.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:07:20.684000 audit[1919]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.684000 audit[1919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffcdf273c0 a2=0 a3=0 items=0 ppid=1797 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:07:20.686000 audit[1921]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.686000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd3825290 a2=0 a3=0 items=0 ppid=1797 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.686000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:07:20.688000 audit[1923]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.688000 audit[1923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffecc303f0 a2=0 a3=0 items=0 ppid=1797 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.688000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:07:20.690000 audit[1925]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.690000 audit[1925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdd39e880 a2=0 a3=0 items=0 ppid=1797 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.690000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:07:20.691000 audit[1927]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.691000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffffcd42e00 a2=0 a3=0 items=0 ppid=1797 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.691000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:07:20.693000 audit[1929]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.693000 audit[1929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc1cf3b40 a2=0 a3=0 items=0 ppid=1797 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:07:20.699000 audit[1934]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.699000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc82271d0 a2=0 a3=0 items=0 ppid=1797 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:07:20.701000 audit[1936]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.701000 audit[1936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcc8b4a10 a2=0 a3=0 items=0 ppid=1797 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:07:20.703000 audit[1938]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.703000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd93f23b0 a2=0 a3=0 items=0 ppid=1797 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:07:20.705000 audit[1940]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.705000 audit[1940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff3c449b0 a2=0 a3=0 items=0 ppid=1797 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:07:20.707000 audit[1942]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.707000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffefed8b30 a2=0 a3=0 items=0 ppid=1797 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:07:20.709000 audit[1944]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:20.709000 audit[1944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce5df140 a2=0 a3=0 items=0 ppid=1797 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:07:20.786000 audit[1949]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.786000 audit[1949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffb809a90 a2=0 a3=0 items=0 ppid=1797 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:07:20.788000 audit[1951]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.788000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd1e42830 a2=0 a3=0 items=0 ppid=1797 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.788000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:07:20.796000 audit[1959]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.796000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd8867170 a2=0 a3=0 items=0 ppid=1797 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:07:20.804000 audit[1965]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
12:07:20.804000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffbd20bd0 a2=0 a3=0 items=0 ppid=1797 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:07:20.806000 audit[1967]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.806000 audit[1967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffde6f3040 a2=0 a3=0 items=0 ppid=1797 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:07:20.808000 audit[1969]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.808000 audit[1969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffebca5500 a2=0 a3=0 items=0 ppid=1797 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:07:20.810000 audit[1971]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.810000 audit[1971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd4aaa8d0 a2=0 a3=0 items=0 ppid=1797 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.810000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:07:20.812000 audit[1973]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:20.812000 audit[1973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc42d3d10 a2=0 a3=0 items=0 ppid=1797 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:20.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:07:20.814013 
systemd-networkd[1471]: docker0: Link UP Dec 16 12:07:20.818122 dockerd[1797]: time="2025-12-16T12:07:20.818078847Z" level=info msg="Loading containers: done." Dec 16 12:07:20.830431 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck302734624-merged.mount: Deactivated successfully. Dec 16 12:07:20.838457 dockerd[1797]: time="2025-12-16T12:07:20.838051269Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:07:20.838457 dockerd[1797]: time="2025-12-16T12:07:20.838143215Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:07:20.838457 dockerd[1797]: time="2025-12-16T12:07:20.838296245Z" level=info msg="Initializing buildkit" Dec 16 12:07:20.859590 dockerd[1797]: time="2025-12-16T12:07:20.859541521Z" level=info msg="Completed buildkit initialization" Dec 16 12:07:20.866623 dockerd[1797]: time="2025-12-16T12:07:20.866575022Z" level=info msg="Daemon has completed initialization" Dec 16 12:07:20.866892 dockerd[1797]: time="2025-12-16T12:07:20.866783884Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:07:20.866891 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:07:20.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:21.456756 containerd[1557]: time="2025-12-16T12:07:21.456586115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:07:22.089999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078750296.mount: Deactivated successfully. 
Dec 16 12:07:22.987838 containerd[1557]: time="2025-12-16T12:07:22.987794919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:22.996506 containerd[1557]: time="2025-12-16T12:07:22.996441569Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Dec 16 12:07:23.002243 containerd[1557]: time="2025-12-16T12:07:23.002177998Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:23.005830 containerd[1557]: time="2025-12-16T12:07:23.005768141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:23.007953 containerd[1557]: time="2025-12-16T12:07:23.007789189Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.551164724s" Dec 16 12:07:23.007953 containerd[1557]: time="2025-12-16T12:07:23.007853897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 12:07:23.009189 containerd[1557]: time="2025-12-16T12:07:23.009156147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:07:24.200312 containerd[1557]: time="2025-12-16T12:07:24.200264369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:24.201242 containerd[1557]: time="2025-12-16T12:07:24.201029765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 16 12:07:24.202061 containerd[1557]: time="2025-12-16T12:07:24.202035257Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:24.204559 containerd[1557]: time="2025-12-16T12:07:24.204521622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:24.205701 containerd[1557]: time="2025-12-16T12:07:24.205675144Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.196486263s" Dec 16 12:07:24.205782 containerd[1557]: time="2025-12-16T12:07:24.205703228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 12:07:24.206152 
containerd[1557]: time="2025-12-16T12:07:24.206129695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:07:25.525979 containerd[1557]: time="2025-12-16T12:07:25.525835957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:25.526498 containerd[1557]: time="2025-12-16T12:07:25.526430909Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 16 12:07:25.527472 containerd[1557]: time="2025-12-16T12:07:25.527436622Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:25.530024 containerd[1557]: time="2025-12-16T12:07:25.529983233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:25.530875 containerd[1557]: time="2025-12-16T12:07:25.530830114Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.324666848s" Dec 16 12:07:25.530875 containerd[1557]: time="2025-12-16T12:07:25.530864204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 12:07:25.531313 containerd[1557]: time="2025-12-16T12:07:25.531279973Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:07:26.546326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288163805.mount: Deactivated successfully. 
Dec 16 12:07:27.164159 containerd[1557]: time="2025-12-16T12:07:27.164038687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:27.174933 containerd[1557]: time="2025-12-16T12:07:27.174863629Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Dec 16 12:07:27.185424 containerd[1557]: time="2025-12-16T12:07:27.185354340Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:27.201535 containerd[1557]: time="2025-12-16T12:07:27.201468693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:27.202081 containerd[1557]: time="2025-12-16T12:07:27.202048440Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.670732616s" Dec 16 12:07:27.202202 containerd[1557]: time="2025-12-16T12:07:27.202185897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 12:07:27.202857 containerd[1557]: time="2025-12-16T12:07:27.202602474Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:07:27.364699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:07:27.366139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:07:27.502920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:27.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:27.506583 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:07:27.506644 kernel: audit: type=1130 audit(1765886847.502:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:27.507577 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:07:27.637629 kubelet[2100]: E1216 12:07:27.637542 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:07:27.640586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:07:27.640713 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 12:07:27.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:07:27.641341 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.8M memory peak. Dec 16 12:07:27.645035 kernel: audit: type=1131 audit(1765886847.640:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:07:27.894503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727319429.mount: Deactivated successfully. Dec 16 12:07:28.477011 containerd[1557]: time="2025-12-16T12:07:28.476965300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:28.477850 containerd[1557]: time="2025-12-16T12:07:28.477472512Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18793643" Dec 16 12:07:28.478600 containerd[1557]: time="2025-12-16T12:07:28.478565552Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:28.482039 containerd[1557]: time="2025-12-16T12:07:28.481452238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:28.482758 containerd[1557]: time="2025-12-16T12:07:28.482725455Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.280088819s" Dec 16 12:07:28.482758 containerd[1557]: time="2025-12-16T12:07:28.482758295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 16 12:07:28.483267 containerd[1557]: time="2025-12-16T12:07:28.483243441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:07:28.932667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458402654.mount: Deactivated successfully. 
Dec 16 12:07:28.938496 containerd[1557]: time="2025-12-16T12:07:28.938136926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:07:28.938675 containerd[1557]: time="2025-12-16T12:07:28.938629881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:07:28.939544 containerd[1557]: time="2025-12-16T12:07:28.939509383Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:07:28.942109 containerd[1557]: time="2025-12-16T12:07:28.942082410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:07:28.942783 containerd[1557]: time="2025-12-16T12:07:28.942748254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.474657ms" Dec 16 12:07:28.942783 containerd[1557]: time="2025-12-16T12:07:28.942776328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:07:28.943322 containerd[1557]: time="2025-12-16T12:07:28.943279255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:07:29.519226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744768472.mount: Deactivated successfully. 
Dec 16 12:07:31.551041 containerd[1557]: time="2025-12-16T12:07:31.550249629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:31.551399 containerd[1557]: time="2025-12-16T12:07:31.551113048Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=68134789" Dec 16 12:07:31.552074 containerd[1557]: time="2025-12-16T12:07:31.552042013Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:31.555420 containerd[1557]: time="2025-12-16T12:07:31.555383458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:31.557134 containerd[1557]: time="2025-12-16T12:07:31.557098125Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.61378427s" Dec 16 12:07:31.557175 containerd[1557]: time="2025-12-16T12:07:31.557132719Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 16 12:07:36.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:36.510804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:36.510955 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.8M memory peak. Dec 16 12:07:36.512920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:07:36.520032 kernel: audit: type=1130 audit(1765886856.509:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:36.520133 kernel: audit: type=1131 audit(1765886856.509:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:36.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:36.538963 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-8.scope)... Dec 16 12:07:36.538978 systemd[1]: Reloading... Dec 16 12:07:36.610133 zram_generator::config[2295]: No configuration found. Dec 16 12:07:36.770800 systemd[1]: Reloading finished in 231 ms. 
Dec 16 12:07:36.803912 kernel: audit: type=1334 audit(1765886856.795:279): prog-id=61 op=LOAD Dec 16 12:07:36.804057 kernel: audit: type=1334 audit(1765886856.795:280): prog-id=41 op=UNLOAD Dec 16 12:07:36.804085 kernel: audit: type=1334 audit(1765886856.795:281): prog-id=62 op=LOAD Dec 16 12:07:36.804110 kernel: audit: type=1334 audit(1765886856.795:282): prog-id=63 op=LOAD Dec 16 12:07:36.804139 kernel: audit: type=1334 audit(1765886856.795:283): prog-id=42 op=UNLOAD Dec 16 12:07:36.795000 audit: BPF prog-id=61 op=LOAD Dec 16 12:07:36.795000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:07:36.795000 audit: BPF prog-id=62 op=LOAD Dec 16 12:07:36.795000 audit: BPF prog-id=63 op=LOAD Dec 16 12:07:36.795000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:07:36.795000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:07:36.805312 kernel: audit: type=1334 audit(1765886856.795:284): prog-id=43 op=UNLOAD Dec 16 12:07:36.796000 audit: BPF prog-id=64 op=LOAD Dec 16 12:07:36.806393 kernel: audit: type=1334 audit(1765886856.796:285): prog-id=64 op=LOAD Dec 16 12:07:36.796000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:07:36.807442 kernel: audit: type=1334 audit(1765886856.796:286): prog-id=57 op=UNLOAD Dec 16 12:07:36.797000 audit: BPF prog-id=65 op=LOAD Dec 16 12:07:36.797000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:07:36.799000 audit: BPF prog-id=66 op=LOAD Dec 16 12:07:36.803000 audit: BPF prog-id=67 op=LOAD Dec 16 12:07:36.806000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:07:36.806000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:07:36.806000 audit: BPF prog-id=68 op=LOAD Dec 16 12:07:36.806000 audit: BPF prog-id=69 op=LOAD Dec 16 12:07:36.806000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:07:36.806000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:07:36.807000 audit: BPF prog-id=70 op=LOAD Dec 16 12:07:36.807000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:07:36.807000 audit: BPF prog-id=71 op=LOAD Dec 16 12:07:36.808000 audit: BPF prog-id=72 op=LOAD Dec 16 12:07:36.808000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:07:36.808000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:07:36.808000 audit: BPF prog-id=73 op=LOAD Dec 16 12:07:36.808000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:07:36.809000 audit: BPF prog-id=74 op=LOAD Dec 16 12:07:36.809000 audit: BPF prog-id=75 op=LOAD Dec 16 12:07:36.809000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:07:36.809000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:07:36.809000 audit: BPF prog-id=76 op=LOAD Dec 16 12:07:36.809000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:07:36.811000 audit: BPF prog-id=77 op=LOAD Dec 16 12:07:36.811000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:07:36.811000 audit: BPF prog-id=78 op=LOAD Dec 16 12:07:36.811000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:07:36.811000 audit: BPF prog-id=79 op=LOAD Dec 16 12:07:36.811000 audit: BPF prog-id=80 op=LOAD Dec 16 12:07:36.811000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:07:36.811000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:07:36.822205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:07:36.822311 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:07:36.824078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:36.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:07:36.826203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:07:36.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:36.968238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:36.982309 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:07:37.013667 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:07:37.013667 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:07:37.013667 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:07:37.014002 kubelet[2339]: I1216 12:07:37.013716 2339 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:07:37.416773 kubelet[2339]: I1216 12:07:37.416724 2339 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:07:37.416773 kubelet[2339]: I1216 12:07:37.416758 2339 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:07:37.417053 kubelet[2339]: I1216 12:07:37.417006 2339 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:07:37.433133 kubelet[2339]: E1216 12:07:37.433084 2339 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:07:37.434419 kubelet[2339]: I1216 12:07:37.434390 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:07:37.444068 kubelet[2339]: I1216 12:07:37.443612 2339 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:07:37.446139 kubelet[2339]: I1216 12:07:37.446118 2339 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:07:37.446448 kubelet[2339]: I1216 12:07:37.446409 2339 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:07:37.446585 kubelet[2339]: I1216 12:07:37.446439 2339 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:07:37.446690 kubelet[2339]: I1216 12:07:37.446654 2339 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:07:37.446690 kubelet[2339]: I1216 12:07:37.446663 2339 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:07:37.446853 kubelet[2339]: I1216 12:07:37.446837 2339 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:07:37.449745 kubelet[2339]: I1216 12:07:37.449707 2339 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:07:37.449745 kubelet[2339]: I1216 12:07:37.449734 2339 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:07:37.449957 kubelet[2339]: I1216 12:07:37.449757 2339 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:07:37.450789 kubelet[2339]: I1216 12:07:37.450770 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:07:37.452073 kubelet[2339]: I1216 12:07:37.452053 2339 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:07:37.452975 kubelet[2339]: I1216 12:07:37.452819 2339 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:07:37.452975 kubelet[2339]: W1216 12:07:37.452942 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
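Every "dial tcp 10.0.0.13:6443: connect: connection refused" error in this stretch has the same cause: the kubelet is bootstrapping against an API server that is only brought up later as the kube-apiserver-localhost static pod (see below), so the retries are expected during first boot. The address comes straight from the log; a trivial probe like the following reproduces the same failure independently of the kubelet (illustrative only):

// Sketch: check whether the API server endpoint the kubelet keeps retrying is
// accepting TCP connections yet.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.13:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}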
Dec 16 12:07:37.453946 kubelet[2339]: E1216 12:07:37.453918 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:07:37.455345 kubelet[2339]: I1216 12:07:37.455307 2339 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:07:37.455345 kubelet[2339]: I1216 12:07:37.455350 2339 server.go:1289] "Started kubelet" Dec 16 12:07:37.456692 kubelet[2339]: I1216 12:07:37.456673 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:07:37.457601 kubelet[2339]: E1216 12:07:37.457557 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:07:37.458693 kubelet[2339]: I1216 12:07:37.458642 2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:07:37.459559 kubelet[2339]: I1216 12:07:37.459539 2339 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:07:37.461865 kubelet[2339]: E1216 12:07:37.460120 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b0bda427c833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:07:37.455323187 +0000 UTC m=+0.469690181,LastTimestamp:2025-12-16 12:07:37.455323187 +0000 UTC m=+0.469690181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:07:37.461961 kubelet[2339]: I1216 12:07:37.461698 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:07:37.462124 kubelet[2339]: I1216 12:07:37.462107 2339 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:07:37.462287 kubelet[2339]: I1216 12:07:37.462238 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:07:37.462887 kubelet[2339]: E1216 12:07:37.462688 2339 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:07:37.463299 kubelet[2339]: E1216 12:07:37.463281 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:07:37.463343 kubelet[2339]: I1216 12:07:37.463316 2339 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:07:37.463496 kubelet[2339]: I1216 12:07:37.463477 2339 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:07:37.463545 kubelet[2339]: I1216 12:07:37.463527 2339 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:07:37.462000 audit[2356]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.462000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffe2d9b70 a2=0 a3=0 items=0 ppid=2339 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:07:37.464206 kubelet[2339]: E1216 12:07:37.464101 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:07:37.464298 kubelet[2339]: I1216 12:07:37.464276 2339 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:07:37.464378 kubelet[2339]: I1216 12:07:37.464363 2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:07:37.465019 kubelet[2339]: E1216 12:07:37.464990 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Dec 16 12:07:37.464000 audit[2357]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.464000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe4ca2c0 a2=0 a3=0 items=0 ppid=2339 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:07:37.465693 kubelet[2339]: I1216 12:07:37.465674 2339 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:07:37.466000 audit[2360]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.466000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc1311ee0 a2=0 a3=0 items=0 ppid=2339 pid=2360 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:07:37.467000 audit[2362]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.467000 audit[2362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcc338320 a2=0 a3=0 items=0 ppid=2339 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:07:37.474000 audit[2367]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.474000 audit[2367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffddcbbfe0 a2=0 a3=0 items=0 ppid=2339 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:07:37.476851 kubelet[2339]: I1216 12:07:37.476797 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:07:37.476000 audit[2370]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:37.476000 audit[2372]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.476000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb585490 a2=0 a3=0 items=0 ppid=2339 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:07:37.476000 audit[2370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff860f5d0 a2=0 a3=0 items=0 ppid=2339 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:07:37.478601 kubelet[2339]: I1216 12:07:37.478573 2339 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:07:37.478657 kubelet[2339]: I1216 12:07:37.478608 2339 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:07:37.478657 kubelet[2339]: I1216 12:07:37.478625 2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:07:37.478657 kubelet[2339]: I1216 12:07:37.478633 2339 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:07:37.478857 kubelet[2339]: E1216 12:07:37.478698 2339 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:07:37.479076 kubelet[2339]: E1216 12:07:37.479049 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:07:37.478000 audit[2373]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.478000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9b114b0 a2=0 a3=0 items=0 ppid=2339 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:07:37.479749 kubelet[2339]: I1216 12:07:37.479734 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:07:37.479749 kubelet[2339]: I1216 12:07:37.479744 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:07:37.479852 kubelet[2339]: I1216 12:07:37.479761 2339 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:07:37.478000 audit[2374]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:37.478000 audit[2374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff801d7b0 a2=0 a3=0 items=0 ppid=2339 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:07:37.479000 audit[2376]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:37.479000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffccc53aa0 a2=0 a3=0 items=0 ppid=2339 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:07:37.480000 audit[2375]: NETFILTER_CFG table=filter:52 family=2 entries=1 
op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:37.480000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1dd61e0 a2=0 a3=0 items=0 ppid=2339 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:07:37.480000 audit[2377]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:37.480000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffef51aa50 a2=0 a3=0 items=0 ppid=2339 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:37.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:07:37.563458 kubelet[2339]: E1216 12:07:37.563401 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:07:37.579745 kubelet[2339]: E1216 12:07:37.579689 2339 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:07:37.601214 kubelet[2339]: I1216 12:07:37.601181 2339 policy_none.go:49] "None policy: Start" Dec 16 12:07:37.601214 kubelet[2339]: I1216 12:07:37.601206 2339 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:07:37.601214 kubelet[2339]: I1216 12:07:37.601220 2339 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:07:37.605901 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:07:37.623636 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:07:37.626193 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:07:37.649878 kubelet[2339]: E1216 12:07:37.649841 2339 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:07:37.650261 kubelet[2339]: I1216 12:07:37.650067 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:07:37.650261 kubelet[2339]: I1216 12:07:37.650084 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:07:37.650261 kubelet[2339]: I1216 12:07:37.650250 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:07:37.651077 kubelet[2339]: E1216 12:07:37.651052 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:07:37.651077 kubelet[2339]: E1216 12:07:37.651092 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:07:37.665458 kubelet[2339]: E1216 12:07:37.665416 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Dec 16 12:07:37.751881 kubelet[2339]: I1216 12:07:37.751768 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:07:37.752306 kubelet[2339]: E1216 12:07:37.752266 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Dec 16 12:07:37.813043 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 16 12:07:37.828472 kubelet[2339]: E1216 12:07:37.828433 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:37.831954 systemd[1]: Created slice kubepods-burstable-pod77169e19f4ff8c4ff257886f9e69b580.slice - libcontainer container kubepods-burstable-pod77169e19f4ff8c4ff257886f9e69b580.slice. Dec 16 12:07:37.833919 kubelet[2339]: E1216 12:07:37.833877 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:37.841751 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Dec 16 12:07:37.843521 kubelet[2339]: E1216 12:07:37.843497 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:37.865824 kubelet[2339]: I1216 12:07:37.865708 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:37.865824 kubelet[2339]: I1216 12:07:37.865752 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:37.865824 kubelet[2339]: I1216 12:07:37.865770 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:07:37.865824 kubelet[2339]: I1216 12:07:37.865788 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77169e19f4ff8c4ff257886f9e69b580-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77169e19f4ff8c4ff257886f9e69b580\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:37.866044 kubelet[2339]: I1216 12:07:37.865841 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:37.866044 kubelet[2339]: I1216 12:07:37.865872 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:37.866044 kubelet[2339]: I1216 12:07:37.865930 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77169e19f4ff8c4ff257886f9e69b580-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77169e19f4ff8c4ff257886f9e69b580\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:37.866044 kubelet[2339]: I1216 12:07:37.865973 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77169e19f4ff8c4ff257886f9e69b580-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77169e19f4ff8c4ff257886f9e69b580\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:37.866044 kubelet[2339]: I1216 12:07:37.865991 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:37.954213 kubelet[2339]: I1216 12:07:37.954171 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:07:37.954640 kubelet[2339]: E1216 12:07:37.954607 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Dec 16 12:07:38.066883 kubelet[2339]: E1216 12:07:38.066771 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Dec 16 12:07:38.129331 kubelet[2339]: E1216 12:07:38.129291 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.130077 containerd[1557]: time="2025-12-16T12:07:38.129877736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 12:07:38.134422 kubelet[2339]: E1216 12:07:38.134398 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.134860 containerd[1557]: time="2025-12-16T12:07:38.134826592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77169e19f4ff8c4ff257886f9e69b580,Namespace:kube-system,Attempt:0,}" Dec 16 12:07:38.144192 kubelet[2339]: E1216 12:07:38.144152 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.144669 containerd[1557]: time="2025-12-16T12:07:38.144632125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 12:07:38.152707 containerd[1557]: time="2025-12-16T12:07:38.152646724Z" level=info msg="connecting to shim ef4efa4930dd15756c2d6bffa640f1d390c96e6ee32929b5e47d787f04b8a327" address="unix:///run/containerd/s/bf67505e81d6a74bc3d9cfe08c4f98aa2efb44cc9c8f359e30a93ebf25a8d57c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:07:38.177039 containerd[1557]: time="2025-12-16T12:07:38.176911059Z" level=info msg="connecting to shim a2f3c606f2ead34922b47a67e05cc2cc97268903a20b5cc31882193a32fce432" address="unix:///run/containerd/s/ac1c031252ab4aa9e3d86692dfd0e9eef1c44ae0d456588ef500692ccb511539" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:07:38.184262 systemd[1]: Started cri-containerd-ef4efa4930dd15756c2d6bffa640f1d390c96e6ee32929b5e47d787f04b8a327.scope - libcontainer container ef4efa4930dd15756c2d6bffa640f1d390c96e6ee32929b5e47d787f04b8a327. 
Dec 16 12:07:38.185305 containerd[1557]: time="2025-12-16T12:07:38.185267074Z" level=info msg="connecting to shim 386ee92af41ad704d9245026a669b8b20cb64a91abaa3433ad94b11a71344a6e" address="unix:///run/containerd/s/48d684c143e7fd901de3e2a639227ed122efd1bda359ce5e2c06ef7851754eda" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:07:38.197000 audit: BPF prog-id=81 op=LOAD Dec 16 12:07:38.198000 audit: BPF prog-id=82 op=LOAD Dec 16 12:07:38.198000 audit[2397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.198000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:07:38.198000 audit[2397]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.199000 audit: BPF prog-id=83 op=LOAD Dec 16 12:07:38.199000 audit[2397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.199000 audit: BPF prog-id=84 op=LOAD Dec 16 12:07:38.199000 audit[2397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.199000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:07:38.199000 audit[2397]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.199000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.199000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:07:38.199000 audit[2397]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.199000 audit: BPF prog-id=85 op=LOAD Dec 16 12:07:38.199000 audit[2397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2385 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566346566613439333064643135373536633264366266666136343066 Dec 16 12:07:38.213770 systemd[1]: Started cri-containerd-386ee92af41ad704d9245026a669b8b20cb64a91abaa3433ad94b11a71344a6e.scope - libcontainer container 386ee92af41ad704d9245026a669b8b20cb64a91abaa3433ad94b11a71344a6e. Dec 16 12:07:38.214957 systemd[1]: Started cri-containerd-a2f3c606f2ead34922b47a67e05cc2cc97268903a20b5cc31882193a32fce432.scope - libcontainer container a2f3c606f2ead34922b47a67e05cc2cc97268903a20b5cc31882193a32fce432. 
Dec 16 12:07:38.226000 audit: BPF prog-id=86 op=LOAD Dec 16 12:07:38.228999 containerd[1557]: time="2025-12-16T12:07:38.228965923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef4efa4930dd15756c2d6bffa640f1d390c96e6ee32929b5e47d787f04b8a327\"" Dec 16 12:07:38.228000 audit: BPF prog-id=87 op=LOAD Dec 16 12:07:38.227000 audit: BPF prog-id=88 op=LOAD Dec 16 12:07:38.227000 audit[2434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.229000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:07:38.229000 audit[2434]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.229000 audit: BPF prog-id=89 op=LOAD Dec 16 12:07:38.229000 audit[2434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.229000 audit: BPF prog-id=90 op=LOAD Dec 16 12:07:38.229000 audit[2434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.229000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:07:38.229000 audit[2434]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.229000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.229000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:07:38.229000 audit[2434]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.229000 audit: BPF prog-id=91 op=LOAD Dec 16 12:07:38.229000 audit[2434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2418 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663363363036663265616433343932326234376136376530356363 Dec 16 12:07:38.231115 kubelet[2339]: E1216 12:07:38.230567 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.230000 audit: BPF prog-id=92 op=LOAD Dec 16 12:07:38.230000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.230000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:07:38.230000 audit[2461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.231000 audit: BPF prog-id=93 op=LOAD Dec 16 12:07:38.231000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.231000 audit: BPF prog-id=94 op=LOAD Dec 16 12:07:38.231000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.231000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:07:38.231000 audit[2461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.232000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:07:38.232000 audit[2461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.232000 audit: BPF prog-id=95 op=LOAD Dec 16 12:07:38.232000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366565393261663431616437303464393234353032366136363962 Dec 16 12:07:38.238574 containerd[1557]: time="2025-12-16T12:07:38.238506929Z" level=info msg="CreateContainer within sandbox \"ef4efa4930dd15756c2d6bffa640f1d390c96e6ee32929b5e47d787f04b8a327\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:07:38.249199 containerd[1557]: time="2025-12-16T12:07:38.249048809Z" level=info msg="Container 1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05: CDI devices from CRI 
Config.CDIDevices: []" Dec 16 12:07:38.259831 containerd[1557]: time="2025-12-16T12:07:38.259719370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77169e19f4ff8c4ff257886f9e69b580,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2f3c606f2ead34922b47a67e05cc2cc97268903a20b5cc31882193a32fce432\"" Dec 16 12:07:38.260588 kubelet[2339]: E1216 12:07:38.260549 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.263318 containerd[1557]: time="2025-12-16T12:07:38.263050321Z" level=info msg="CreateContainer within sandbox \"ef4efa4930dd15756c2d6bffa640f1d390c96e6ee32929b5e47d787f04b8a327\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05\"" Dec 16 12:07:38.263508 containerd[1557]: time="2025-12-16T12:07:38.263458139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"386ee92af41ad704d9245026a669b8b20cb64a91abaa3433ad94b11a71344a6e\"" Dec 16 12:07:38.263795 containerd[1557]: time="2025-12-16T12:07:38.263762772Z" level=info msg="StartContainer for \"1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05\"" Dec 16 12:07:38.264488 containerd[1557]: time="2025-12-16T12:07:38.264463376Z" level=info msg="CreateContainer within sandbox \"a2f3c606f2ead34922b47a67e05cc2cc97268903a20b5cc31882193a32fce432\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:07:38.264791 kubelet[2339]: E1216 12:07:38.264728 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.265943 containerd[1557]: time="2025-12-16T12:07:38.265883036Z" level=info msg="connecting to shim 1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05" address="unix:///run/containerd/s/bf67505e81d6a74bc3d9cfe08c4f98aa2efb44cc9c8f359e30a93ebf25a8d57c" protocol=ttrpc version=3 Dec 16 12:07:38.268258 containerd[1557]: time="2025-12-16T12:07:38.268225120Z" level=info msg="CreateContainer within sandbox \"386ee92af41ad704d9245026a669b8b20cb64a91abaa3433ad94b11a71344a6e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:07:38.276814 containerd[1557]: time="2025-12-16T12:07:38.276779180Z" level=info msg="Container d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:07:38.281684 containerd[1557]: time="2025-12-16T12:07:38.281593471Z" level=info msg="Container 148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:07:38.284220 systemd[1]: Started cri-containerd-1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05.scope - libcontainer container 1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05. 
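The RunPodSandbox and CreateContainer messages above are containerd's side of CRI calls issued by the kubelet for the three static control-plane pods. The same runtime state can be read back over the CRI API directly; the sketch below lists pod sandboxes, in the spirit of what crictl does. The endpoint and gRPC wiring are assumptions, not details taken from the log.

// Sketch: list pod sandboxes over the CRI v1 API served by containerd.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint; the CRI plugin is normally served on containerd's main socket.
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s %s/%s %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}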
Dec 16 12:07:38.285807 containerd[1557]: time="2025-12-16T12:07:38.285766355Z" level=info msg="CreateContainer within sandbox \"386ee92af41ad704d9245026a669b8b20cb64a91abaa3433ad94b11a71344a6e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979\"" Dec 16 12:07:38.287126 containerd[1557]: time="2025-12-16T12:07:38.286249821Z" level=info msg="StartContainer for \"d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979\"" Dec 16 12:07:38.287744 containerd[1557]: time="2025-12-16T12:07:38.287711588Z" level=info msg="connecting to shim d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979" address="unix:///run/containerd/s/48d684c143e7fd901de3e2a639227ed122efd1bda359ce5e2c06ef7851754eda" protocol=ttrpc version=3 Dec 16 12:07:38.290755 containerd[1557]: time="2025-12-16T12:07:38.290722576Z" level=info msg="CreateContainer within sandbox \"a2f3c606f2ead34922b47a67e05cc2cc97268903a20b5cc31882193a32fce432\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb\"" Dec 16 12:07:38.291260 containerd[1557]: time="2025-12-16T12:07:38.291233099Z" level=info msg="StartContainer for \"148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb\"" Dec 16 12:07:38.292516 containerd[1557]: time="2025-12-16T12:07:38.292486934Z" level=info msg="connecting to shim 148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb" address="unix:///run/containerd/s/ac1c031252ab4aa9e3d86692dfd0e9eef1c44ae0d456588ef500692ccb511539" protocol=ttrpc version=3 Dec 16 12:07:38.294062 kubelet[2339]: E1216 12:07:38.293889 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:07:38.301000 audit: BPF prog-id=96 op=LOAD Dec 16 12:07:38.302000 audit: BPF prog-id=97 op=LOAD Dec 16 12:07:38.302000 audit[2515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.302000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:07:38.302000 audit[2515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.303000 audit: BPF prog-id=98 op=LOAD Dec 16 12:07:38.303000 
audit[2515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.303000 audit: BPF prog-id=99 op=LOAD Dec 16 12:07:38.303000 audit[2515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.304000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:07:38.304000 audit[2515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.305000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:07:38.305000 audit[2515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.305000 audit: BPF prog-id=100 op=LOAD Dec 16 12:07:38.305000 audit[2515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2385 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343963643334393533343564393862316230613264323531323266 Dec 16 12:07:38.309246 systemd[1]: Started cri-containerd-d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979.scope - libcontainer container d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979. 
Dec 16 12:07:38.312855 systemd[1]: Started cri-containerd-148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb.scope - libcontainer container 148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb. Dec 16 12:07:38.323000 audit: BPF prog-id=101 op=LOAD Dec 16 12:07:38.324000 audit: BPF prog-id=102 op=LOAD Dec 16 12:07:38.325000 audit: BPF prog-id=103 op=LOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=104 op=LOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=105 op=LOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=106 op=LOAD Dec 16 12:07:38.325000 audit[2533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2436 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437323961626630303063326464656633393866636637636163393730 Dec 16 12:07:38.325000 audit: BPF prog-id=107 op=LOAD Dec 16 12:07:38.325000 audit[2543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.326000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:07:38.326000 audit[2543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.328000 audit: BPF prog-id=108 op=LOAD Dec 16 12:07:38.328000 audit[2543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.328000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.328000 audit: BPF prog-id=109 op=LOAD Dec 16 12:07:38.328000 audit[2543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.328000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:07:38.328000 audit[2543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.328000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:07:38.328000 audit[2543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.328000 audit: BPF prog-id=110 op=LOAD Dec 16 12:07:38.328000 audit[2543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2418 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134383537366665313661393839396439613065633730363139323936 Dec 16 12:07:38.343249 containerd[1557]: time="2025-12-16T12:07:38.342753865Z" level=info msg="StartContainer for \"1649cd3495345d98b1b0a2d25122f128f5057f982ad887abbf9c06c173288c05\" returns successfully" Dec 16 12:07:38.357073 kubelet[2339]: I1216 12:07:38.356367 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:07:38.357608 kubelet[2339]: E1216 12:07:38.357578 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: 
connection refused" node="localhost" Dec 16 12:07:38.363467 containerd[1557]: time="2025-12-16T12:07:38.363426404Z" level=info msg="StartContainer for \"148576fe16a9899d9a0ec7061929652c2de9c777a379425e33aaadf374c8cabb\" returns successfully" Dec 16 12:07:38.367979 containerd[1557]: time="2025-12-16T12:07:38.367817987Z" level=info msg="StartContainer for \"d729abf000c2ddef398fcf7cac970af3cb76d51d32878b729dc16e2db18b8979\" returns successfully" Dec 16 12:07:38.491335 kubelet[2339]: E1216 12:07:38.491297 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:38.491455 kubelet[2339]: E1216 12:07:38.491438 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.493487 kubelet[2339]: E1216 12:07:38.493455 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:38.493695 kubelet[2339]: E1216 12:07:38.493570 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:38.495413 kubelet[2339]: E1216 12:07:38.495390 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:38.495515 kubelet[2339]: E1216 12:07:38.495489 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:39.160892 kubelet[2339]: I1216 12:07:39.160866 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:07:39.498424 kubelet[2339]: E1216 12:07:39.498320 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:39.498526 kubelet[2339]: E1216 12:07:39.498468 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:39.498742 kubelet[2339]: E1216 12:07:39.498707 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:07:39.498812 kubelet[2339]: E1216 12:07:39.498798 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:40.304251 kubelet[2339]: E1216 12:07:40.304218 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:07:40.318894 kubelet[2339]: I1216 12:07:40.318478 2339 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:07:40.318894 kubelet[2339]: E1216 12:07:40.318520 2339 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:07:40.364723 kubelet[2339]: I1216 12:07:40.364675 2339 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:40.371038 kubelet[2339]: E1216 12:07:40.370929 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:40.371038 kubelet[2339]: I1216 12:07:40.370958 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:07:40.374326 kubelet[2339]: E1216 12:07:40.374070 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:07:40.374326 kubelet[2339]: I1216 12:07:40.374098 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:40.376856 kubelet[2339]: E1216 12:07:40.376823 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:40.452747 kubelet[2339]: I1216 12:07:40.452492 2339 apiserver.go:52] "Watching apiserver" Dec 16 12:07:40.464246 kubelet[2339]: I1216 12:07:40.464212 2339 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:07:42.436870 kubelet[2339]: I1216 12:07:42.436828 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:42.445600 kubelet[2339]: E1216 12:07:42.445563 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:42.501310 kubelet[2339]: E1216 12:07:42.501276 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:42.919793 systemd[1]: Reload requested from client PID 2625 ('systemctl') (unit session-8.scope)... Dec 16 12:07:42.919809 systemd[1]: Reloading... Dec 16 12:07:42.998081 zram_generator::config[2674]: No configuration found. Dec 16 12:07:43.172056 systemd[1]: Reloading finished in 251 ms. Dec 16 12:07:43.202529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:07:43.215156 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:07:43.215437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:43.219190 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:07:43.219239 kernel: audit: type=1131 audit(1765886863.214:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:43.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:43.215505 systemd[1]: kubelet.service: Consumed 864ms CPU time, 127.7M memory peak. Dec 16 12:07:43.219162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:07:43.219000 audit: BPF prog-id=111 op=LOAD Dec 16 12:07:43.219000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:07:43.221772 kernel: audit: type=1334 audit(1765886863.219:382): prog-id=111 op=LOAD Dec 16 12:07:43.221809 kernel: audit: type=1334 audit(1765886863.219:383): prog-id=73 op=UNLOAD Dec 16 12:07:43.221826 kernel: audit: type=1334 audit(1765886863.219:384): prog-id=112 op=LOAD Dec 16 12:07:43.221843 kernel: audit: type=1334 audit(1765886863.220:385): prog-id=113 op=LOAD Dec 16 12:07:43.221863 kernel: audit: type=1334 audit(1765886863.220:386): prog-id=74 op=UNLOAD Dec 16 12:07:43.219000 audit: BPF prog-id=112 op=LOAD Dec 16 12:07:43.220000 audit: BPF prog-id=113 op=LOAD Dec 16 12:07:43.220000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:07:43.220000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:07:43.225345 kernel: audit: type=1334 audit(1765886863.220:387): prog-id=75 op=UNLOAD Dec 16 12:07:43.225377 kernel: audit: type=1334 audit(1765886863.223:388): prog-id=114 op=LOAD Dec 16 12:07:43.223000 audit: BPF prog-id=114 op=LOAD Dec 16 12:07:43.223000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:07:43.227167 kernel: audit: type=1334 audit(1765886863.223:389): prog-id=64 op=UNLOAD Dec 16 12:07:43.227194 kernel: audit: type=1334 audit(1765886863.223:390): prog-id=115 op=LOAD Dec 16 12:07:43.223000 audit: BPF prog-id=115 op=LOAD Dec 16 12:07:43.237000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:07:43.237000 audit: BPF prog-id=116 op=LOAD Dec 16 12:07:43.237000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:07:43.238000 audit: BPF prog-id=117 op=LOAD Dec 16 12:07:43.238000 audit: BPF prog-id=118 op=LOAD Dec 16 12:07:43.238000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:07:43.238000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:07:43.238000 audit: BPF prog-id=119 op=LOAD Dec 16 12:07:43.239000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:07:43.239000 audit: BPF prog-id=120 op=LOAD Dec 16 12:07:43.239000 audit: BPF prog-id=121 op=LOAD Dec 16 12:07:43.239000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:07:43.239000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:07:43.239000 audit: BPF prog-id=122 op=LOAD Dec 16 12:07:43.239000 audit: BPF prog-id=123 op=LOAD Dec 16 12:07:43.239000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:07:43.239000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:07:43.240000 audit: BPF prog-id=124 op=LOAD Dec 16 12:07:43.240000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:07:43.241000 audit: BPF prog-id=125 op=LOAD Dec 16 12:07:43.241000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:07:43.241000 audit: BPF prog-id=126 op=LOAD Dec 16 12:07:43.241000 audit: BPF prog-id=127 op=LOAD Dec 16 12:07:43.241000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:07:43.241000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:07:43.242000 audit: BPF prog-id=128 op=LOAD Dec 16 12:07:43.242000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:07:43.242000 audit: BPF prog-id=129 op=LOAD Dec 16 12:07:43.242000 audit: BPF prog-id=130 op=LOAD Dec 16 12:07:43.242000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:07:43.242000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:07:43.375756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:07:43.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:43.379793 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:07:43.490667 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:07:43.490667 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:07:43.490667 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:07:43.490667 kubelet[2713]: I1216 12:07:43.490627 2713 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:07:43.497153 kubelet[2713]: I1216 12:07:43.497119 2713 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:07:43.497153 kubelet[2713]: I1216 12:07:43.497150 2713 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:07:43.497394 kubelet[2713]: I1216 12:07:43.497378 2713 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:07:43.498663 kubelet[2713]: I1216 12:07:43.498635 2713 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:07:43.501082 kubelet[2713]: I1216 12:07:43.501034 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:07:43.504984 kubelet[2713]: I1216 12:07:43.504959 2713 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:07:43.508055 kubelet[2713]: I1216 12:07:43.508009 2713 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:07:43.508389 kubelet[2713]: I1216 12:07:43.508359 2713 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:07:43.508626 kubelet[2713]: I1216 12:07:43.508455 2713 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:07:43.508742 kubelet[2713]: I1216 12:07:43.508731 2713 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:07:43.508787 kubelet[2713]: I1216 12:07:43.508780 2713 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:07:43.508874 kubelet[2713]: I1216 12:07:43.508865 2713 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:07:43.509130 kubelet[2713]: I1216 12:07:43.509114 2713 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:07:43.509216 kubelet[2713]: I1216 12:07:43.509204 2713 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:07:43.509304 kubelet[2713]: I1216 12:07:43.509294 2713 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:07:43.509569 kubelet[2713]: I1216 12:07:43.509554 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:07:43.513595 kubelet[2713]: I1216 12:07:43.513489 2713 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:07:43.514320 kubelet[2713]: I1216 12:07:43.514249 2713 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:07:43.524131 kubelet[2713]: I1216 12:07:43.524102 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:07:43.526095 kubelet[2713]: I1216 12:07:43.526083 2713 server.go:1289] "Started kubelet" Dec 16 12:07:43.530405 kubelet[2713]: I1216 12:07:43.529448 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:07:43.531950 kubelet[2713]: I1216 12:07:43.531920 
2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:07:43.534178 kubelet[2713]: I1216 12:07:43.529419 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:07:43.534336 kubelet[2713]: E1216 12:07:43.534315 2713 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:07:43.534711 kubelet[2713]: I1216 12:07:43.534692 2713 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:07:43.534765 kubelet[2713]: I1216 12:07:43.529469 2713 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:07:43.534957 kubelet[2713]: I1216 12:07:43.534835 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:07:43.534957 kubelet[2713]: I1216 12:07:43.534932 2713 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:07:43.535855 kubelet[2713]: I1216 12:07:43.535783 2713 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:07:43.536423 kubelet[2713]: I1216 12:07:43.536346 2713 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:07:43.536847 kubelet[2713]: I1216 12:07:43.536827 2713 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:07:43.536994 kubelet[2713]: I1216 12:07:43.536926 2713 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:07:43.540968 kubelet[2713]: I1216 12:07:43.540937 2713 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:07:43.549719 kubelet[2713]: I1216 12:07:43.549676 2713 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:07:43.551396 kubelet[2713]: I1216 12:07:43.551367 2713 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:07:43.551448 kubelet[2713]: I1216 12:07:43.551407 2713 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:07:43.551448 kubelet[2713]: I1216 12:07:43.551429 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
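The "Creating Container Manager object based on Node Config" line earlier in this startup carries the full nodeConfig as a JSON payload, including the hard eviction thresholds (memory.available below 100Mi, nodefs.available below 10%, and so on). A sketch for pulling those thresholds back out, assuming that single log line has been isolated and saved to a string; eviction_thresholds is an illustrative helper of mine.

import json
import re

def eviction_thresholds(container_manager_line: str) -> list[tuple[str, str]]:
    """Extract (signal, threshold) pairs from the nodeConfig JSON payload.
    Expects one isolated journal line, so the greedy match ends at the
    payload's closing brace. (Illustrative helper.)"""
    match = re.search(r"nodeConfig=(\{.*\})", container_manager_line)
    if not match:
        return []
    cfg = json.loads(match.group(1))
    pairs = []
    for threshold in cfg.get("HardEvictionThresholds", []):
        value = threshold["Value"]
        if value.get("Quantity") is not None:
            amount = value["Quantity"]
        else:
            amount = f"{value['Percentage']:.0%}"
        pairs.append((threshold["Signal"], amount))
    return pairs

# For the line above this yields:
# [('memory.available', '100Mi'), ('nodefs.available', '10%'),
#  ('nodefs.inodesFree', '5%'), ('imagefs.available', '15%'),
#  ('imagefs.inodesFree', '5%')]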
Dec 16 12:07:43.551448 kubelet[2713]: I1216 12:07:43.551437 2713 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:07:43.551549 kubelet[2713]: E1216 12:07:43.551484 2713 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:07:43.591943 kubelet[2713]: I1216 12:07:43.591914 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:07:43.591943 kubelet[2713]: I1216 12:07:43.591934 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:07:43.592646 kubelet[2713]: I1216 12:07:43.592626 2713 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:07:43.592795 kubelet[2713]: I1216 12:07:43.592779 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:07:43.592824 kubelet[2713]: I1216 12:07:43.592793 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:07:43.592824 kubelet[2713]: I1216 12:07:43.592811 2713 policy_none.go:49] "None policy: Start" Dec 16 12:07:43.592824 kubelet[2713]: I1216 12:07:43.592822 2713 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:07:43.592882 kubelet[2713]: I1216 12:07:43.592831 2713 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:07:43.592935 kubelet[2713]: I1216 12:07:43.592923 2713 state_mem.go:75] "Updated machine memory state" Dec 16 12:07:43.598352 kubelet[2713]: E1216 12:07:43.598325 2713 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:07:43.599552 kubelet[2713]: I1216 12:07:43.599520 2713 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:07:43.599615 kubelet[2713]: I1216 12:07:43.599536 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:07:43.600688 kubelet[2713]: I1216 12:07:43.600662 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:07:43.600907 kubelet[2713]: E1216 12:07:43.600890 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:07:43.653178 kubelet[2713]: I1216 12:07:43.653043 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:07:43.654810 kubelet[2713]: I1216 12:07:43.653080 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:43.654810 kubelet[2713]: I1216 12:07:43.653152 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:43.660144 kubelet[2713]: E1216 12:07:43.660103 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:43.710107 kubelet[2713]: I1216 12:07:43.710080 2713 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:07:43.716203 kubelet[2713]: I1216 12:07:43.716167 2713 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:07:43.716323 kubelet[2713]: I1216 12:07:43.716307 2713 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:07:43.737918 kubelet[2713]: I1216 12:07:43.737882 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:43.737918 kubelet[2713]: I1216 12:07:43.737923 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:43.738133 kubelet[2713]: I1216 12:07:43.737942 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:43.738133 kubelet[2713]: I1216 12:07:43.738010 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77169e19f4ff8c4ff257886f9e69b580-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77169e19f4ff8c4ff257886f9e69b580\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:43.738133 kubelet[2713]: I1216 12:07:43.738061 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77169e19f4ff8c4ff257886f9e69b580-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77169e19f4ff8c4ff257886f9e69b580\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:43.738133 kubelet[2713]: I1216 12:07:43.738083 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77169e19f4ff8c4ff257886f9e69b580-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77169e19f4ff8c4ff257886f9e69b580\") " 
pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:43.738133 kubelet[2713]: I1216 12:07:43.738099 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:43.738280 kubelet[2713]: I1216 12:07:43.738115 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:07:43.738280 kubelet[2713]: I1216 12:07:43.738143 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:07:43.958253 kubelet[2713]: E1216 12:07:43.958122 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:43.959179 kubelet[2713]: E1216 12:07:43.959153 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:43.960446 kubelet[2713]: E1216 12:07:43.960397 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:44.513569 kubelet[2713]: I1216 12:07:44.513512 2713 apiserver.go:52] "Watching apiserver" Dec 16 12:07:44.535367 kubelet[2713]: I1216 12:07:44.535311 2713 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:07:44.576220 kubelet[2713]: I1216 12:07:44.575385 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:44.576220 kubelet[2713]: E1216 12:07:44.575859 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:44.576220 kubelet[2713]: E1216 12:07:44.576116 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:44.698859 kubelet[2713]: E1216 12:07:44.698827 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:07:44.699581 kubelet[2713]: E1216 12:07:44.699532 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:44.729272 kubelet[2713]: I1216 12:07:44.729211 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.729194776 podStartE2EDuration="1.729194776s" 
podCreationTimestamp="2025-12-16 12:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:07:44.720323599 +0000 UTC m=+1.336725050" watchObservedRunningTime="2025-12-16 12:07:44.729194776 +0000 UTC m=+1.345596187" Dec 16 12:07:44.741870 kubelet[2713]: I1216 12:07:44.741792 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.7417761499999997 podStartE2EDuration="2.74177615s" podCreationTimestamp="2025-12-16 12:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:07:44.741230995 +0000 UTC m=+1.357632406" watchObservedRunningTime="2025-12-16 12:07:44.74177615 +0000 UTC m=+1.358177561" Dec 16 12:07:44.742149 kubelet[2713]: I1216 12:07:44.742081 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7420740380000002 podStartE2EDuration="1.742074038s" podCreationTimestamp="2025-12-16 12:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:07:44.729432679 +0000 UTC m=+1.345834090" watchObservedRunningTime="2025-12-16 12:07:44.742074038 +0000 UTC m=+1.358475449" Dec 16 12:07:45.576754 kubelet[2713]: E1216 12:07:45.576727 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:45.577154 kubelet[2713]: E1216 12:07:45.576810 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:45.577154 kubelet[2713]: E1216 12:07:45.576916 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:47.563404 kubelet[2713]: E1216 12:07:47.563361 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:48.327756 kubelet[2713]: I1216 12:07:48.327718 2713 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:07:48.332817 containerd[1557]: time="2025-12-16T12:07:48.332754659Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:07:48.333852 kubelet[2713]: I1216 12:07:48.333329 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:07:49.123221 systemd[1]: Created slice kubepods-besteffort-pod805599ea_7b6d_48b9_839b_3b53bfbe9192.slice - libcontainer container kubepods-besteffort-pod805599ea_7b6d_48b9_839b_3b53bfbe9192.slice. 
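The "Nameserver limits exceeded" errors repeated throughout this boot mean the node's resolv.conf lists more than the three nameservers the kubelet will pass through, so the applied line is trimmed to 1.1.1.1 1.0.0.1 8.8.8.8. A sketch for checking the offending file; the /etc/resolv.conf path is the conventional default and an assumption here, since the kubelet may be pointed at a different file via its resolvConf setting, and nameservers is an illustrative helper name.

from pathlib import Path

MAX_NAMESERVERS = 3  # the kubelet, like glibc, only applies the first three

def nameservers(resolv_conf: str = "/etc/resolv.conf") -> list[str]:
    """List the nameserver entries in a resolv.conf-style file.
    (Illustrative helper; path is an assumption, see note above.)"""
    entries = []
    for line in Path(resolv_conf).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            entries.append(parts[1])
    return entries

if __name__ == "__main__":
    found = nameservers()
    print(found)
    if len(found) > MAX_NAMESERVERS:
        print(f"{len(found) - MAX_NAMESERVERS} entries beyond the first "
              f"{MAX_NAMESERVERS} are being ignored, as the errors above report")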
Dec 16 12:07:49.184683 kubelet[2713]: I1216 12:07:49.184550 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/805599ea-7b6d-48b9-839b-3b53bfbe9192-xtables-lock\") pod \"kube-proxy-ktcjp\" (UID: \"805599ea-7b6d-48b9-839b-3b53bfbe9192\") " pod="kube-system/kube-proxy-ktcjp" Dec 16 12:07:49.184683 kubelet[2713]: I1216 12:07:49.184590 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/805599ea-7b6d-48b9-839b-3b53bfbe9192-lib-modules\") pod \"kube-proxy-ktcjp\" (UID: \"805599ea-7b6d-48b9-839b-3b53bfbe9192\") " pod="kube-system/kube-proxy-ktcjp" Dec 16 12:07:49.184683 kubelet[2713]: I1216 12:07:49.184617 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wftw\" (UniqueName: \"kubernetes.io/projected/805599ea-7b6d-48b9-839b-3b53bfbe9192-kube-api-access-7wftw\") pod \"kube-proxy-ktcjp\" (UID: \"805599ea-7b6d-48b9-839b-3b53bfbe9192\") " pod="kube-system/kube-proxy-ktcjp" Dec 16 12:07:49.184683 kubelet[2713]: I1216 12:07:49.184643 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/805599ea-7b6d-48b9-839b-3b53bfbe9192-kube-proxy\") pod \"kube-proxy-ktcjp\" (UID: \"805599ea-7b6d-48b9-839b-3b53bfbe9192\") " pod="kube-system/kube-proxy-ktcjp" Dec 16 12:07:49.438931 kubelet[2713]: E1216 12:07:49.438168 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:49.439172 containerd[1557]: time="2025-12-16T12:07:49.439130667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktcjp,Uid:805599ea-7b6d-48b9-839b-3b53bfbe9192,Namespace:kube-system,Attempt:0,}" Dec 16 12:07:49.608541 containerd[1557]: time="2025-12-16T12:07:49.608491569Z" level=info msg="connecting to shim 03936893debc0127c5a28bdb87c4673829b76f3dbf3eae743f335b5d0dcd5cce" address="unix:///run/containerd/s/40c90cfedec1c9915fcbbc4293e928b86cf7ad77657f283c6a2b74e1036e8e89" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:07:49.643308 systemd[1]: Started cri-containerd-03936893debc0127c5a28bdb87c4673829b76f3dbf3eae743f335b5d0dcd5cce.scope - libcontainer container 03936893debc0127c5a28bdb87c4673829b76f3dbf3eae743f335b5d0dcd5cce. Dec 16 12:07:49.650562 systemd[1]: Created slice kubepods-besteffort-podc3b3c7f6_0a0a_404c_927b_55e4aa40fd0f.slice - libcontainer container kubepods-besteffort-podc3b3c7f6_0a0a_404c_927b_55e4aa40fd0f.slice. 
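The "Created slice kubepods-besteffort-pod805599ea_7b6d_48b9_839b_3b53bfbe9192.slice" record and the kube-proxy-ktcjp volume records above show the same pod from two angles: the pod UID 805599ea-7b6d-48b9-839b-3b53bfbe9192 reappears in the slice name with its dashes turned into underscores, a consequence of the systemd cgroup driver reported earlier in this startup. A tiny sketch that reproduces the besteffort mapping seen here; pod_slice_name is an illustrative helper, and other QoS classes are not shown in this log, so it makes no claim about them.

def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    """Rebuild the leaf slice name observed for this besteffort pod.
    (Illustrative helper; only the besteffort case appears in this log.)"""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# UID taken from the kube-proxy-ktcjp volume records above
print(pod_slice_name("805599ea-7b6d-48b9-839b-3b53bfbe9192"))
# kubepods-besteffort-pod805599ea_7b6d_48b9_839b_3b53bfbe9192.slice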
Dec 16 12:07:49.671000 audit: BPF prog-id=131 op=LOAD Dec 16 12:07:49.673638 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:07:49.673727 kernel: audit: type=1334 audit(1765886869.671:423): prog-id=131 op=LOAD Dec 16 12:07:49.672000 audit: BPF prog-id=132 op=LOAD Dec 16 12:07:49.677051 kernel: audit: type=1334 audit(1765886869.672:424): prog-id=132 op=LOAD Dec 16 12:07:49.677132 kernel: audit: type=1300 audit(1765886869.672:424): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.672000 audit[2788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.683564 kernel: audit: type=1327 audit(1765886869.672:424): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.683629 kernel: audit: type=1334 audit(1765886869.673:425): prog-id=132 op=UNLOAD Dec 16 12:07:49.673000 audit: BPF prog-id=132 op=UNLOAD Dec 16 12:07:49.684622 kernel: audit: type=1300 audit(1765886869.673:425): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.673000 audit[2788]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.688752 kubelet[2713]: I1216 12:07:49.688571 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dx2c\" (UniqueName: \"kubernetes.io/projected/c3b3c7f6-0a0a-404c-927b-55e4aa40fd0f-kube-api-access-5dx2c\") pod \"tigera-operator-7dcd859c48-9m6rx\" (UID: \"c3b3c7f6-0a0a-404c-927b-55e4aa40fd0f\") " pod="tigera-operator/tigera-operator-7dcd859c48-9m6rx" Dec 16 12:07:49.688752 kubelet[2713]: I1216 12:07:49.688629 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3b3c7f6-0a0a-404c-927b-55e4aa40fd0f-var-lib-calico\") pod 
\"tigera-operator-7dcd859c48-9m6rx\" (UID: \"c3b3c7f6-0a0a-404c-927b-55e4aa40fd0f\") " pod="tigera-operator/tigera-operator-7dcd859c48-9m6rx" Dec 16 12:07:49.692266 kernel: audit: type=1327 audit(1765886869.673:425): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.692368 kernel: audit: type=1334 audit(1765886869.673:426): prog-id=133 op=LOAD Dec 16 12:07:49.673000 audit: BPF prog-id=133 op=LOAD Dec 16 12:07:49.673000 audit[2788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.697629 kernel: audit: type=1300 audit(1765886869.673:426): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.701856 kernel: audit: type=1327 audit(1765886869.673:426): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.674000 audit: BPF prog-id=134 op=LOAD Dec 16 12:07:49.674000 audit[2788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.678000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:07:49.678000 audit[2788]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.678000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:07:49.678000 audit[2788]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.678000 audit: BPF prog-id=135 op=LOAD Dec 16 12:07:49.678000 audit[2788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2776 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033393336383933646562633031323763356132386264623837633436 Dec 16 12:07:49.711744 containerd[1557]: time="2025-12-16T12:07:49.711708697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktcjp,Uid:805599ea-7b6d-48b9-839b-3b53bfbe9192,Namespace:kube-system,Attempt:0,} returns sandbox id \"03936893debc0127c5a28bdb87c4673829b76f3dbf3eae743f335b5d0dcd5cce\"" Dec 16 12:07:49.713086 kubelet[2713]: E1216 12:07:49.712859 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:49.719361 containerd[1557]: time="2025-12-16T12:07:49.719255129Z" level=info msg="CreateContainer within sandbox \"03936893debc0127c5a28bdb87c4673829b76f3dbf3eae743f335b5d0dcd5cce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:07:49.732039 containerd[1557]: time="2025-12-16T12:07:49.731979174Z" level=info msg="Container a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:07:49.745024 containerd[1557]: time="2025-12-16T12:07:49.744974864Z" level=info msg="CreateContainer within sandbox \"03936893debc0127c5a28bdb87c4673829b76f3dbf3eae743f335b5d0dcd5cce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4\"" Dec 16 12:07:49.745658 containerd[1557]: time="2025-12-16T12:07:49.745618385Z" level=info msg="StartContainer for \"a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4\"" Dec 16 12:07:49.748920 containerd[1557]: time="2025-12-16T12:07:49.748881962Z" level=info msg="connecting to shim a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4" address="unix:///run/containerd/s/40c90cfedec1c9915fcbbc4293e928b86cf7ad77657f283c6a2b74e1036e8e89" protocol=ttrpc version=3 Dec 16 12:07:49.773305 systemd[1]: Started cri-containerd-a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4.scope - libcontainer container a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4. 
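Both the kube-proxy sandbox (03936893…) and the kube-proxy container (a6e20d4d…) above report the same shim address under /run/containerd/s/40c90…, that is, one shim serves the whole pod. A sketch for recovering that grouping from a saved copy of the journal, with the regex written against the exact "connecting to shim" message format shown here; shims_by_socket is an illustrative helper of mine.

import re
from collections import defaultdict

SHIM_RE = re.compile(r'msg="connecting to shim (\S+)" address="(\S+)"')

def shims_by_socket(journal_text: str) -> dict[str, list[str]]:
    """Group shim/container IDs by the shim socket they connect to.
    (Illustrative helper; matches the containerd message format above.)"""
    grouped: dict[str, list[str]] = defaultdict(list)
    for ident, address in SHIM_RE.findall(journal_text):
        grouped[address].append(ident)
    return dict(grouped)

Run over this section, it maps the 40c90… socket to both kube-proxy IDs, and the 48d68… and ac1c0… sockets to the kube-controller-manager and kube-apiserver containers respectively.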
Dec 16 12:07:49.823000 audit: BPF prog-id=136 op=LOAD Dec 16 12:07:49.823000 audit[2813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2776 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136653230643464356163353033646264616130643033353334633830 Dec 16 12:07:49.823000 audit: BPF prog-id=137 op=LOAD Dec 16 12:07:49.823000 audit[2813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2776 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136653230643464356163353033646264616130643033353334633830 Dec 16 12:07:49.823000 audit: BPF prog-id=137 op=UNLOAD Dec 16 12:07:49.823000 audit[2813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2776 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136653230643464356163353033646264616130643033353334633830 Dec 16 12:07:49.823000 audit: BPF prog-id=136 op=UNLOAD Dec 16 12:07:49.823000 audit[2813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2776 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136653230643464356163353033646264616130643033353334633830 Dec 16 12:07:49.823000 audit: BPF prog-id=138 op=LOAD Dec 16 12:07:49.823000 audit[2813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2776 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:49.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136653230643464356163353033646264616130643033353334633830 Dec 16 12:07:49.859253 containerd[1557]: time="2025-12-16T12:07:49.859196582Z" level=info msg="StartContainer for 
\"a6e20d4d5ac503dbdaa0d03534c80187f1fc3c97b15dafa2395236deca1692c4\" returns successfully" Dec 16 12:07:49.954326 containerd[1557]: time="2025-12-16T12:07:49.954217836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9m6rx,Uid:c3b3c7f6-0a0a-404c-927b-55e4aa40fd0f,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:07:49.987717 containerd[1557]: time="2025-12-16T12:07:49.987191432Z" level=info msg="connecting to shim b88899792a14224800fada10966fc097e583ebc57d6eed95ec8194f99e6c0953" address="unix:///run/containerd/s/43bc286d9962b9c2bed46a08ce2e50ec1cd6071c43f3a919bae1b10451473285" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:07:50.001000 audit[2910]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.001000 audit[2910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4d6c830 a2=0 a3=1 items=0 ppid=2826 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:07:50.001000 audit[2911]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.001000 audit[2911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff613c440 a2=0 a3=1 items=0 ppid=2826 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:07:50.002000 audit[2913]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.002000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed5703c0 a2=0 a3=1 items=0 ppid=2826 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:07:50.003000 audit[2916]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.003000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddea9fe0 a2=0 a3=1 items=0 ppid=2826 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:07:50.003000 audit[2914]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.003000 audit[2914]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8818330 a2=0 a3=1 items=0 ppid=2826 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:07:50.006000 audit[2917]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.006000 audit[2917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc57bdc20 a2=0 a3=1 items=0 ppid=2826 pid=2917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:07:50.024268 systemd[1]: Started cri-containerd-b88899792a14224800fada10966fc097e583ebc57d6eed95ec8194f99e6c0953.scope - libcontainer container b88899792a14224800fada10966fc097e583ebc57d6eed95ec8194f99e6c0953. Dec 16 12:07:50.033000 audit: BPF prog-id=139 op=LOAD Dec 16 12:07:50.033000 audit: BPF prog-id=140 op=LOAD Dec 16 12:07:50.033000 audit[2895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.033000 audit: BPF prog-id=140 op=UNLOAD Dec 16 12:07:50.033000 audit[2895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.034000 audit: BPF prog-id=141 op=LOAD Dec 16 12:07:50.034000 audit[2895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.034000 audit: BPF prog-id=142 op=LOAD Dec 16 12:07:50.034000 audit[2895]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.034000 audit: BPF prog-id=142 op=UNLOAD Dec 16 12:07:50.034000 audit[2895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.034000 audit: BPF prog-id=141 op=UNLOAD Dec 16 12:07:50.034000 audit[2895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.034000 audit: BPF prog-id=143 op=LOAD Dec 16 12:07:50.034000 audit[2895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2878 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238383839393739326131343232343830306661646131303936366663 Dec 16 12:07:50.056181 containerd[1557]: time="2025-12-16T12:07:50.056135246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9m6rx,Uid:c3b3c7f6-0a0a-404c-927b-55e4aa40fd0f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b88899792a14224800fada10966fc097e583ebc57d6eed95ec8194f99e6c0953\"" Dec 16 12:07:50.057691 containerd[1557]: time="2025-12-16T12:07:50.057596073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:07:50.104000 audit[2935]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.104000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd83a62c0 a2=0 a3=1 items=0 ppid=2826 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:07:50.108000 audit[2937]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.108000 audit[2937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe7b36d60 a2=0 a3=1 items=0 ppid=2826 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:07:50.111000 audit[2940]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.111000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe76af980 a2=0 a3=1 items=0 ppid=2826 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:07:50.111000 audit[2941]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.111000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb334920 a2=0 a3=1 items=0 ppid=2826 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:07:50.115000 audit[2943]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.115000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd0abce0 a2=0 a3=1 items=0 ppid=2826 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:07:50.115000 audit[2944]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.115000 audit[2944]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffafcb3d0 a2=0 a3=1 items=0 ppid=2826 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:07:50.117000 audit[2946]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.117000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffec35fc00 a2=0 a3=1 items=0 ppid=2826 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:07:50.121000 audit[2949]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.121000 audit[2949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdd9750d0 a2=0 a3=1 items=0 ppid=2826 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:07:50.123000 audit[2950]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.123000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc19c1c0 a2=0 a3=1 items=0 ppid=2826 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:07:50.126000 audit[2952]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.126000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1b3b090 a2=0 a3=1 items=0 ppid=2826 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.126000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:07:50.127000 audit[2953]: 
NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.127000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4897a50 a2=0 a3=1 items=0 ppid=2826 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:07:50.128000 audit[2955]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.128000 audit[2955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffed95490 a2=0 a3=1 items=0 ppid=2826 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:07:50.132000 audit[2958]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.132000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcb2cfc0 a2=0 a3=1 items=0 ppid=2826 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:07:50.136000 audit[2961]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.136000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff7871920 a2=0 a3=1 items=0 ppid=2826 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:07:50.138000 audit[2962]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.138000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe3e4bb60 a2=0 a3=1 items=0 ppid=2826 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:07:50.139000 audit[2964]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.139000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcf714320 a2=0 a3=1 items=0 ppid=2826 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:07:50.143000 audit[2967]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.143000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff94f4270 a2=0 a3=1 items=0 ppid=2826 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:07:50.144000 audit[2968]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.144000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4416090 a2=0 a3=1 items=0 ppid=2826 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:07:50.146000 audit[2970]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:07:50.146000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffed3a0640 a2=0 a3=1 items=0 ppid=2826 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:07:50.165000 audit[2976]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:50.165000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc4049650 a2=0 a3=1 items=0 ppid=2826 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:50.174000 audit[2976]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:50.174000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc4049650 a2=0 a3=1 items=0 ppid=2826 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:50.176000 audit[2981]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.176000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdf1588a0 a2=0 a3=1 items=0 ppid=2826 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:07:50.179000 audit[2983]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.179000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffb2af490 a2=0 a3=1 items=0 ppid=2826 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:07:50.182000 audit[2986]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.182000 audit[2986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd777c2a0 a2=0 a3=1 items=0 ppid=2826 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:07:50.184000 audit[2987]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.184000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffe94b1500 a2=0 a3=1 items=0 ppid=2826 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:07:50.186000 audit[2989]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.186000 audit[2989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffccc3bff0 a2=0 a3=1 items=0 ppid=2826 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:07:50.187000 audit[2990]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.187000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffd210d0 a2=0 a3=1 items=0 ppid=2826 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:07:50.190000 audit[2992]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.190000 audit[2992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe8b18a80 a2=0 a3=1 items=0 ppid=2826 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:07:50.193000 audit[2995]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.193000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd7d8e580 a2=0 a3=1 items=0 ppid=2826 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 
12:07:50.194000 audit[2996]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.194000 audit[2996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe133a3d0 a2=0 a3=1 items=0 ppid=2826 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:07:50.197000 audit[2998]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.197000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdb29db90 a2=0 a3=1 items=0 ppid=2826 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:07:50.198000 audit[2999]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.198000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa7cf900 a2=0 a3=1 items=0 ppid=2826 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:07:50.201000 audit[3001]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.201000 audit[3001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf8ddc90 a2=0 a3=1 items=0 ppid=2826 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:07:50.204000 audit[3004]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.204000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd3efff0 a2=0 a3=1 items=0 ppid=2826 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.204000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:07:50.209000 audit[3007]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.209000 audit[3007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffab04df0 a2=0 a3=1 items=0 ppid=2826 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:07:50.209000 audit[3008]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.209000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffda9c0e90 a2=0 a3=1 items=0 ppid=2826 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:07:50.212000 audit[3010]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.212000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc921d770 a2=0 a3=1 items=0 ppid=2826 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:07:50.216000 audit[3013]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.216000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce55a9c0 a2=0 a3=1 items=0 ppid=2826 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:07:50.217000 audit[3014]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.217000 audit[3014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffe3ad3d30 a2=0 a3=1 items=0 ppid=2826 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.217000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:07:50.220000 audit[3016]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.220000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffcd3cca0 a2=0 a3=1 items=0 ppid=2826 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:07:50.221000 audit[3017]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.221000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0090460 a2=0 a3=1 items=0 ppid=2826 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:07:50.223000 audit[3019]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.223000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff9d57670 a2=0 a3=1 items=0 ppid=2826 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:07:50.227000 audit[3022]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:07:50.227000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe0442050 a2=0 a3=1 items=0 ppid=2826 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:07:50.230000 audit[3024]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:07:50.230000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffffd148a30 a2=0 a3=1 items=0 ppid=2826 
pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.230000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:50.231000 audit[3024]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:07:50.231000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffffd148a30 a2=0 a3=1 items=0 ppid=2826 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:50.231000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:50.586229 kubelet[2713]: E1216 12:07:50.585997 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:50.596466 kubelet[2713]: I1216 12:07:50.596398 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ktcjp" podStartSLOduration=1.596382056 podStartE2EDuration="1.596382056s" podCreationTimestamp="2025-12-16 12:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:07:50.596229492 +0000 UTC m=+7.212630903" watchObservedRunningTime="2025-12-16 12:07:50.596382056 +0000 UTC m=+7.212783467" Dec 16 12:07:51.649992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942896735.mount: Deactivated successfully. 
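The burst of NETFILTER_CFG audit records above is kube-proxy creating its KUBE-* chains and rules in the mangle, filter and nat tables for both IPv4 (family=2) and IPv6 (family=10), via the nft-backed iptables binary (exe=/usr/sbin/xtables-nft-multi). As a rough illustration (not part of the journal), a small Python sketch that tallies such records by table, address family and operation; the regex follows the field order seen in these entries, and the sample lines are abbreviated copies of records above.

```python
# Sketch: tally NETFILTER_CFG audit records by (table, family, op).
# family 2 is AF_INET (IPv4); family 10 is AF_INET6 (IPv6).
import re
from collections import Counter

RECORD = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def tally(journal_text: str) -> Counter:
    counts = Counter()
    for m in RECORD.finditer(journal_text):
        family = {"2": "ipv4", "10": "ipv6"}.get(m["family"], m["family"])
        counts[(m["table"], family, m["op"])] += int(m["entries"])
    return counts

# Abbreviated lines taken from the records above.
sample = """\
audit[2910]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain
audit[2911]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain
audit[2976]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain
"""

for key, total in sorted(tally(sample).items()):
    print(key, total)
# ('mangle', 'ipv4', 'nft_register_chain') 1
# ('mangle', 'ipv6', 'nft_register_chain') 1
# ('nat', 'ipv4', 'nft_register_chain') 14
```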
Dec 16 12:07:51.967628 containerd[1557]: time="2025-12-16T12:07:51.967481623Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:51.968553 containerd[1557]: time="2025-12-16T12:07:51.968509345Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:07:51.969641 containerd[1557]: time="2025-12-16T12:07:51.969610447Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:51.971445 containerd[1557]: time="2025-12-16T12:07:51.971397776Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:07:51.972138 containerd[1557]: time="2025-12-16T12:07:51.972106810Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.914473607s" Dec 16 12:07:51.972196 containerd[1557]: time="2025-12-16T12:07:51.972144301Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:07:51.981641 containerd[1557]: time="2025-12-16T12:07:51.981602211Z" level=info msg="CreateContainer within sandbox \"b88899792a14224800fada10966fc097e583ebc57d6eed95ec8194f99e6c0953\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:07:51.987862 containerd[1557]: time="2025-12-16T12:07:51.987335182Z" level=info msg="Container 0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:07:51.991037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597430267.mount: Deactivated successfully. Dec 16 12:07:51.994288 containerd[1557]: time="2025-12-16T12:07:51.994254037Z" level=info msg="CreateContainer within sandbox \"b88899792a14224800fada10966fc097e583ebc57d6eed95ec8194f99e6c0953\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab\"" Dec 16 12:07:51.994833 containerd[1557]: time="2025-12-16T12:07:51.994808629Z" level=info msg="StartContainer for \"0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab\"" Dec 16 12:07:51.995843 containerd[1557]: time="2025-12-16T12:07:51.995803822Z" level=info msg="connecting to shim 0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab" address="unix:///run/containerd/s/43bc286d9962b9c2bed46a08ce2e50ec1cd6071c43f3a919bae1b10451473285" protocol=ttrpc version=3 Dec 16 12:07:52.040237 systemd[1]: Started cri-containerd-0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab.scope - libcontainer container 0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab. 
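For scale, the pull above fetched roughly 20 MB in just under two seconds. A back-of-the-envelope throughput estimate (not part of the journal), using the two figures containerd logged; "bytes read" and the reported image size are different quantities, so treat this only as an approximation.

```python
# Sketch: rough network throughput for the quay.io/tigera/operator:v1.38.7 pull.
BYTES_READ = 20_773_434      # "stop pulling image ...: bytes read=20773434"
PULL_SECONDS = 1.914473607   # "Pulled image ... in 1.914473607s"

mib = BYTES_READ / (1024 * 1024)
print(f"~{mib:.1f} MiB in {PULL_SECONDS:.2f}s -> ~{mib / PULL_SECONDS:.1f} MiB/s")
# -> ~19.8 MiB in 1.91s -> ~10.3 MiB/s
```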
Dec 16 12:07:52.050000 audit: BPF prog-id=144 op=LOAD Dec 16 12:07:52.051000 audit: BPF prog-id=145 op=LOAD Dec 16 12:07:52.051000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.051000 audit: BPF prog-id=145 op=UNLOAD Dec 16 12:07:52.051000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.051000 audit: BPF prog-id=146 op=LOAD Dec 16 12:07:52.051000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.051000 audit: BPF prog-id=147 op=LOAD Dec 16 12:07:52.051000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.051000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:07:52.051000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.051000 audit: BPF prog-id=146 op=UNLOAD Dec 16 12:07:52.051000 audit[3033]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.051000 audit: BPF prog-id=148 op=LOAD Dec 16 12:07:52.051000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2878 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316239663763306666383438316534623935633535346332343366 Dec 16 12:07:52.074141 containerd[1557]: time="2025-12-16T12:07:52.074065529Z" level=info msg="StartContainer for \"0b1b9f7c0ff8481e4b95c554c243f79118892361f0d9314286dbce8755f8c5ab\" returns successfully" Dec 16 12:07:52.601175 kubelet[2713]: I1216 12:07:52.601112 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9m6rx" podStartSLOduration=1.683587273 podStartE2EDuration="3.601094716s" podCreationTimestamp="2025-12-16 12:07:49 +0000 UTC" firstStartedPulling="2025-12-16 12:07:50.057314911 +0000 UTC m=+6.673716322" lastFinishedPulling="2025-12-16 12:07:51.974822354 +0000 UTC m=+8.591223765" observedRunningTime="2025-12-16 12:07:52.600883502 +0000 UTC m=+9.217284953" watchObservedRunningTime="2025-12-16 12:07:52.601094716 +0000 UTC m=+9.217496127" Dec 16 12:07:53.045722 kubelet[2713]: E1216 12:07:53.045621 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:53.595329 kubelet[2713]: E1216 12:07:53.595284 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:54.715678 kubelet[2713]: E1216 12:07:54.715634 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:57.574288 kubelet[2713]: E1216 12:07:57.573898 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:57.601395 kubelet[2713]: E1216 12:07:57.601368 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:07:57.749222 sudo[1776]: pam_unix(sudo:session): session closed for user root Dec 16 12:07:57.748000 audit[1776]: USER_END pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix 
acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:57.750524 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:07:57.750575 kernel: audit: type=1106 audit(1765886877.748:503): pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:57.749000 audit[1776]: CRED_DISP pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:57.756812 kernel: audit: type=1104 audit(1765886877.749:504): pid=1776 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:07:57.757357 sshd[1775]: Connection closed by 10.0.0.1 port 58698 Dec 16 12:07:57.757881 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Dec 16 12:07:57.759000 audit[1771]: USER_END pid=1771 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:57.763721 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:07:57.764003 systemd[1]: session-8.scope: Consumed 6.778s CPU time, 207.8M memory peak. Dec 16 12:07:57.759000 audit[1771]: CRED_DISP pid=1771 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:57.766231 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:07:57.766303 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:58698.service: Deactivated successfully. Dec 16 12:07:57.767898 kernel: audit: type=1106 audit(1765886877.759:505): pid=1771 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:57.768033 kernel: audit: type=1104 audit(1765886877.759:506): pid=1771 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:07:57.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:58698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:07:57.769607 systemd-logind[1531]: Removed session 8. Dec 16 12:07:57.772046 kernel: audit: type=1131 audit(1765886877.765:507): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:58698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:07:59.196000 audit[3122]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:59.196000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd90a7be0 a2=0 a3=1 items=0 ppid=2826 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:59.205513 kernel: audit: type=1325 audit(1765886879.196:508): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:59.205594 kernel: audit: type=1300 audit(1765886879.196:508): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd90a7be0 a2=0 a3=1 items=0 ppid=2826 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:59.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:59.210159 kernel: audit: type=1327 audit(1765886879.196:508): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:59.210219 kernel: audit: type=1325 audit(1765886879.200:509): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:59.200000 audit[3122]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:59.200000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd90a7be0 a2=0 a3=1 items=0 ppid=2826 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:59.217549 kernel: audit: type=1300 audit(1765886879.200:509): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd90a7be0 a2=0 a3=1 items=0 ppid=2826 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:59.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:59.218000 audit[3124]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:07:59.218000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffdbe3360 a2=0 a3=1 items=0 ppid=2826 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:59.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:59.222000 audit[3124]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:07:59.222000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffdbe3360 a2=0 a3=1 items=0 ppid=2826 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:07:59.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:07:59.493191 update_engine[1534]: I20251216 12:07:59.493042 1534 update_attempter.cc:509] Updating boot flags... Dec 16 12:08:02.253000 audit[3144]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:02.253000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffca423190 a2=0 a3=1 items=0 ppid=2826 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:02.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:02.264000 audit[3144]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:02.264000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca423190 a2=0 a3=1 items=0 ppid=2826 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:02.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:02.281000 audit[3146]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:02.281000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd039e300 a2=0 a3=1 items=0 ppid=2826 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:02.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:02.290000 audit[3146]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:02.290000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd039e300 a2=0 a3=1 items=0 ppid=2826 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:02.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:03.305000 audit[3148]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
12:08:03.309904 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:08:03.309962 kernel: audit: type=1325 audit(1765886883.305:516): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:03.305000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffa00efe0 a2=0 a3=1 items=0 ppid=2826 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:03.314585 kernel: audit: type=1300 audit(1765886883.305:516): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffa00efe0 a2=0 a3=1 items=0 ppid=2826 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:03.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:03.317536 kernel: audit: type=1327 audit(1765886883.305:516): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:03.314000 audit[3148]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:03.320150 kernel: audit: type=1325 audit(1765886883.314:517): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:03.314000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa00efe0 a2=0 a3=1 items=0 ppid=2826 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:03.325297 kernel: audit: type=1300 audit(1765886883.314:517): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa00efe0 a2=0 a3=1 items=0 ppid=2826 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:03.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:03.329395 kernel: audit: type=1327 audit(1765886883.314:517): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:05.480000 audit[3151]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:05.480000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd409bd90 a2=0 a3=1 items=0 ppid=2826 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.490368 kernel: audit: type=1325 audit(1765886885.480:518): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:05.490514 kernel: 
audit: type=1300 audit(1765886885.480:518): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd409bd90 a2=0 a3=1 items=0 ppid=2826 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:05.493690 kernel: audit: type=1327 audit(1765886885.480:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:05.494000 audit[3151]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:05.494000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd409bd90 a2=0 a3=1 items=0 ppid=2826 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.500066 kernel: audit: type=1325 audit(1765886885.494:519): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:05.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:05.510000 audit[3153]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:05.510000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffb8d1440 a2=0 a3=1 items=0 ppid=2826 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:05.518000 audit[3153]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:05.518000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb8d1440 a2=0 a3=1 items=0 ppid=2826 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:05.532904 systemd[1]: Created slice kubepods-besteffort-pod44f4ae1e_8b41_4f1e_abb7_7ca402270938.slice - libcontainer container kubepods-besteffort-pod44f4ae1e_8b41_4f1e_abb7_7ca402270938.slice. 
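The NETFILTER_CFG records above log the rule loads themselves, but the invoking command line only appears as the hex-encoded, NUL-separated PROCTITLE field. A minimal decoding sketch (assuming Python 3 is available on the host; this is an illustration for reading the log, not part of auditd or of this system):

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # PROCTITLE value copied from the iptables-restore records above.
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    ))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

The runc PROCTITLE entries decode the same way to runc --root /run/containerd/runc/k8s.io --log ... command lines, truncated where the audit record itself truncates the value.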
Dec 16 12:08:05.588323 kubelet[2713]: I1216 12:08:05.588279 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44f4ae1e-8b41-4f1e-abb7-7ca402270938-tigera-ca-bundle\") pod \"calico-typha-78f4dbfb7-s4wjq\" (UID: \"44f4ae1e-8b41-4f1e-abb7-7ca402270938\") " pod="calico-system/calico-typha-78f4dbfb7-s4wjq" Dec 16 12:08:05.588323 kubelet[2713]: I1216 12:08:05.588322 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/44f4ae1e-8b41-4f1e-abb7-7ca402270938-typha-certs\") pod \"calico-typha-78f4dbfb7-s4wjq\" (UID: \"44f4ae1e-8b41-4f1e-abb7-7ca402270938\") " pod="calico-system/calico-typha-78f4dbfb7-s4wjq" Dec 16 12:08:05.588699 kubelet[2713]: I1216 12:08:05.588352 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmw5\" (UniqueName: \"kubernetes.io/projected/44f4ae1e-8b41-4f1e-abb7-7ca402270938-kube-api-access-bjmw5\") pod \"calico-typha-78f4dbfb7-s4wjq\" (UID: \"44f4ae1e-8b41-4f1e-abb7-7ca402270938\") " pod="calico-system/calico-typha-78f4dbfb7-s4wjq" Dec 16 12:08:05.712630 systemd[1]: Created slice kubepods-besteffort-pod5ce76a86_e18b_4ff2_9202_3d5930f121e8.slice - libcontainer container kubepods-besteffort-pod5ce76a86_e18b_4ff2_9202_3d5930f121e8.slice. Dec 16 12:08:05.790471 kubelet[2713]: I1216 12:08:05.790326 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce76a86-e18b-4ff2-9202-3d5930f121e8-tigera-ca-bundle\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790471 kubelet[2713]: I1216 12:08:05.790386 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-var-run-calico\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790471 kubelet[2713]: I1216 12:08:05.790408 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-cni-bin-dir\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790471 kubelet[2713]: I1216 12:08:05.790424 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-cni-log-dir\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790471 kubelet[2713]: I1216 12:08:05.790441 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-cni-net-dir\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790671 kubelet[2713]: I1216 12:08:05.790464 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-xtables-lock\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790671 kubelet[2713]: I1216 12:08:05.790480 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ce76a86-e18b-4ff2-9202-3d5930f121e8-node-certs\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790671 kubelet[2713]: I1216 12:08:05.790499 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872pp\" (UniqueName: \"kubernetes.io/projected/5ce76a86-e18b-4ff2-9202-3d5930f121e8-kube-api-access-872pp\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790671 kubelet[2713]: I1216 12:08:05.790518 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-flexvol-driver-host\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790671 kubelet[2713]: I1216 12:08:05.790534 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-lib-modules\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790779 kubelet[2713]: I1216 12:08:05.790547 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-policysync\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.790779 kubelet[2713]: I1216 12:08:05.790564 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ce76a86-e18b-4ff2-9202-3d5930f121e8-var-lib-calico\") pod \"calico-node-k6klz\" (UID: \"5ce76a86-e18b-4ff2-9202-3d5930f121e8\") " pod="calico-system/calico-node-k6klz" Dec 16 12:08:05.839328 kubelet[2713]: E1216 12:08:05.839292 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:05.839950 containerd[1557]: time="2025-12-16T12:08:05.839902760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f4dbfb7-s4wjq,Uid:44f4ae1e-8b41-4f1e-abb7-7ca402270938,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:05.897863 kubelet[2713]: E1216 12:08:05.897826 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:05.897863 kubelet[2713]: W1216 12:08:05.897858 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:05.899346 kubelet[2713]: E1216 12:08:05.899291 2713 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:05.900065 kubelet[2713]: E1216 12:08:05.899935 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:05.900065 kubelet[2713]: W1216 12:08:05.899957 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:05.900065 kubelet[2713]: E1216 12:08:05.899977 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:05.901609 containerd[1557]: time="2025-12-16T12:08:05.901520799Z" level=info msg="connecting to shim ce2d3ba542aff10d98198b1dd1342d19095f95c3007ede3742f9ab5117d8a97a" address="unix:///run/containerd/s/890aa08333c331bca62002df6839619a6aeb4752dfebb2cdcaf212a848bd9b0e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:05.909879 kubelet[2713]: E1216 12:08:05.909835 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:05.909879 kubelet[2713]: W1216 12:08:05.909865 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:05.909879 kubelet[2713]: E1216 12:08:05.909885 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:05.938294 systemd[1]: Started cri-containerd-ce2d3ba542aff10d98198b1dd1342d19095f95c3007ede3742f9ab5117d8a97a.scope - libcontainer container ce2d3ba542aff10d98198b1dd1342d19095f95c3007ede3742f9ab5117d8a97a. 
Dec 16 12:08:05.948000 audit: BPF prog-id=149 op=LOAD Dec 16 12:08:05.949000 audit: BPF prog-id=150 op=LOAD Dec 16 12:08:05.949000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.949000 audit: BPF prog-id=150 op=UNLOAD Dec 16 12:08:05.949000 audit[3183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.950000 audit: BPF prog-id=151 op=LOAD Dec 16 12:08:05.950000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.950000 audit: BPF prog-id=152 op=LOAD Dec 16 12:08:05.950000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.950000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:08:05.950000 audit[3183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.950000 audit: BPF prog-id=151 op=UNLOAD Dec 16 12:08:05.950000 audit[3183]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.950000 audit: BPF prog-id=153 op=LOAD Dec 16 12:08:05.950000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3168 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:05.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365326433626135343261666631306439383139386231646431333432 Dec 16 12:08:05.987830 kubelet[2713]: E1216 12:08:05.987772 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:05.994948 containerd[1557]: time="2025-12-16T12:08:05.994891082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f4dbfb7-s4wjq,Uid:44f4ae1e-8b41-4f1e-abb7-7ca402270938,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce2d3ba542aff10d98198b1dd1342d19095f95c3007ede3742f9ab5117d8a97a\"" Dec 16 12:08:05.998419 kubelet[2713]: E1216 12:08:05.998356 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:05.999942 containerd[1557]: time="2025-12-16T12:08:05.999877916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:08:06.017541 kubelet[2713]: E1216 12:08:06.017488 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:06.018057 containerd[1557]: time="2025-12-16T12:08:06.017985448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k6klz,Uid:5ce76a86-e18b-4ff2-9202-3d5930f121e8,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:06.038752 containerd[1557]: time="2025-12-16T12:08:06.038594233Z" level=info msg="connecting to shim 459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e" address="unix:///run/containerd/s/c4a7b51421c97d5933e77bdeb532595357f15b70ffa81b4284e3e6ffc2f4938f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:06.070845 kubelet[2713]: E1216 12:08:06.070809 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.070845 kubelet[2713]: W1216 12:08:06.070839 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Dec 16 12:08:06.071043 kubelet[2713]: E1216 12:08:06.070863 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.071088 kubelet[2713]: E1216 12:08:06.071071 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.071138 kubelet[2713]: W1216 12:08:06.071085 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.071176 kubelet[2713]: E1216 12:08:06.071139 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.071324 kubelet[2713]: E1216 12:08:06.071311 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.071324 kubelet[2713]: W1216 12:08:06.071321 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.071395 kubelet[2713]: E1216 12:08:06.071330 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.071606 kubelet[2713]: E1216 12:08:06.071539 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.071606 kubelet[2713]: W1216 12:08:06.071551 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.071606 kubelet[2713]: E1216 12:08:06.071561 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.072537 kubelet[2713]: E1216 12:08:06.072460 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.072537 kubelet[2713]: W1216 12:08:06.072484 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.072537 kubelet[2713]: E1216 12:08:06.072499 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.073726 kubelet[2713]: E1216 12:08:06.073697 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.073726 kubelet[2713]: W1216 12:08:06.073716 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.073726 kubelet[2713]: E1216 12:08:06.073729 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.074497 systemd[1]: Started cri-containerd-459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e.scope - libcontainer container 459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e. Dec 16 12:08:06.075422 kubelet[2713]: E1216 12:08:06.074798 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.075422 kubelet[2713]: W1216 12:08:06.074813 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.075422 kubelet[2713]: E1216 12:08:06.075208 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.078377 kubelet[2713]: E1216 12:08:06.078341 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.078377 kubelet[2713]: W1216 12:08:06.078368 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.078506 kubelet[2713]: E1216 12:08:06.078398 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.079692 kubelet[2713]: E1216 12:08:06.079654 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.079772 kubelet[2713]: W1216 12:08:06.079720 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.079772 kubelet[2713]: E1216 12:08:06.079743 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.079969 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.081053 kubelet[2713]: W1216 12:08:06.079985 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.079995 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.080231 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.081053 kubelet[2713]: W1216 12:08:06.080240 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.080249 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.080416 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.081053 kubelet[2713]: W1216 12:08:06.080425 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.080434 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.081053 kubelet[2713]: E1216 12:08:06.080606 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.081359 kubelet[2713]: W1216 12:08:06.080615 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.081359 kubelet[2713]: E1216 12:08:06.080629 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.081359 kubelet[2713]: E1216 12:08:06.081119 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.081359 kubelet[2713]: W1216 12:08:06.081134 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.081359 kubelet[2713]: E1216 12:08:06.081145 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.081565 kubelet[2713]: E1216 12:08:06.081542 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.081565 kubelet[2713]: W1216 12:08:06.081560 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.081615 kubelet[2713]: E1216 12:08:06.081572 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.083038 kubelet[2713]: E1216 12:08:06.082468 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.083038 kubelet[2713]: W1216 12:08:06.082486 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.083038 kubelet[2713]: E1216 12:08:06.082499 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.083507 kubelet[2713]: E1216 12:08:06.083486 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.083507 kubelet[2713]: W1216 12:08:06.083501 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.083586 kubelet[2713]: E1216 12:08:06.083514 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.084356 kubelet[2713]: E1216 12:08:06.084040 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.084356 kubelet[2713]: W1216 12:08:06.084056 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.084356 kubelet[2713]: E1216 12:08:06.084072 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.084356 kubelet[2713]: E1216 12:08:06.084256 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.084356 kubelet[2713]: W1216 12:08:06.084264 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.084356 kubelet[2713]: E1216 12:08:06.084272 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.084540 kubelet[2713]: E1216 12:08:06.084460 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.084540 kubelet[2713]: W1216 12:08:06.084469 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.084540 kubelet[2713]: E1216 12:08:06.084484 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.093226 kubelet[2713]: E1216 12:08:06.093195 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.093226 kubelet[2713]: W1216 12:08:06.093220 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.093337 kubelet[2713]: E1216 12:08:06.093240 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.093337 kubelet[2713]: I1216 12:08:06.093263 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8b37b908-ae1e-416d-92b4-0b4c5064435f-varrun\") pod \"csi-node-driver-fx4jj\" (UID: \"8b37b908-ae1e-416d-92b4-0b4c5064435f\") " pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:06.093000 audit: BPF prog-id=154 op=LOAD Dec 16 12:08:06.094000 audit: BPF prog-id=155 op=LOAD Dec 16 12:08:06.094000 audit[3237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.094000 audit: BPF prog-id=155 op=UNLOAD Dec 16 12:08:06.094000 audit[3237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.094701 kubelet[2713]: E1216 12:08:06.094161 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.094701 kubelet[2713]: W1216 12:08:06.094182 2713 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.094701 kubelet[2713]: E1216 12:08:06.094197 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.094701 kubelet[2713]: I1216 12:08:06.094218 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b37b908-ae1e-416d-92b4-0b4c5064435f-kubelet-dir\") pod \"csi-node-driver-fx4jj\" (UID: \"8b37b908-ae1e-416d-92b4-0b4c5064435f\") " pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:06.094000 audit: BPF prog-id=156 op=LOAD Dec 16 12:08:06.094000 audit[3237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.095000 audit: BPF prog-id=157 op=LOAD Dec 16 12:08:06.095000 audit[3237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.095000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:08:06.095000 audit[3237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.095000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:08:06.095000 audit[3237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.095000 audit: BPF prog-id=158 op=LOAD Dec 16 12:08:06.095000 
audit[3237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3227 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396632393538343736333263383532323839386562393432393837 Dec 16 12:08:06.097128 kubelet[2713]: E1216 12:08:06.097100 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.097230 kubelet[2713]: W1216 12:08:06.097214 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.097313 kubelet[2713]: E1216 12:08:06.097299 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.097454 kubelet[2713]: I1216 12:08:06.097433 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8b37b908-ae1e-416d-92b4-0b4c5064435f-socket-dir\") pod \"csi-node-driver-fx4jj\" (UID: \"8b37b908-ae1e-416d-92b4-0b4c5064435f\") " pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:06.098078 kubelet[2713]: E1216 12:08:06.098053 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.098078 kubelet[2713]: W1216 12:08:06.098072 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.098078 kubelet[2713]: E1216 12:08:06.098084 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.098308 kubelet[2713]: E1216 12:08:06.098285 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.098308 kubelet[2713]: W1216 12:08:06.098297 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.098308 kubelet[2713]: E1216 12:08:06.098310 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.100215 kubelet[2713]: E1216 12:08:06.100115 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.100215 kubelet[2713]: W1216 12:08:06.100139 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.100215 kubelet[2713]: E1216 12:08:06.100157 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.101125 kubelet[2713]: E1216 12:08:06.101095 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.101125 kubelet[2713]: W1216 12:08:06.101118 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.101244 kubelet[2713]: E1216 12:08:06.101135 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.101436 kubelet[2713]: E1216 12:08:06.101379 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.101436 kubelet[2713]: W1216 12:08:06.101402 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.101436 kubelet[2713]: E1216 12:08:06.101414 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.101793 kubelet[2713]: I1216 12:08:06.101471 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26t9t\" (UniqueName: \"kubernetes.io/projected/8b37b908-ae1e-416d-92b4-0b4c5064435f-kube-api-access-26t9t\") pod \"csi-node-driver-fx4jj\" (UID: \"8b37b908-ae1e-416d-92b4-0b4c5064435f\") " pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:06.102466 kubelet[2713]: E1216 12:08:06.102441 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.102466 kubelet[2713]: W1216 12:08:06.102462 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.102576 kubelet[2713]: E1216 12:08:06.102483 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.103129 kubelet[2713]: E1216 12:08:06.103107 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.103129 kubelet[2713]: W1216 12:08:06.103124 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.103222 kubelet[2713]: E1216 12:08:06.103136 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.103947 kubelet[2713]: E1216 12:08:06.103835 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.103947 kubelet[2713]: W1216 12:08:06.103852 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.103947 kubelet[2713]: E1216 12:08:06.103864 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.105146 kubelet[2713]: E1216 12:08:06.105113 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.105146 kubelet[2713]: W1216 12:08:06.105139 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.105146 kubelet[2713]: E1216 12:08:06.105157 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.105751 kubelet[2713]: E1216 12:08:06.105729 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.105751 kubelet[2713]: W1216 12:08:06.105748 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.105826 kubelet[2713]: E1216 12:08:06.105761 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.105826 kubelet[2713]: I1216 12:08:06.105792 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8b37b908-ae1e-416d-92b4-0b4c5064435f-registration-dir\") pod \"csi-node-driver-fx4jj\" (UID: \"8b37b908-ae1e-416d-92b4-0b4c5064435f\") " pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:06.107137 kubelet[2713]: E1216 12:08:06.107092 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.107137 kubelet[2713]: W1216 12:08:06.107113 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.107137 kubelet[2713]: E1216 12:08:06.107127 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.107447 kubelet[2713]: E1216 12:08:06.107431 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.107447 kubelet[2713]: W1216 12:08:06.107445 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.107514 kubelet[2713]: E1216 12:08:06.107456 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.145718 containerd[1557]: time="2025-12-16T12:08:06.145678296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k6klz,Uid:5ce76a86-e18b-4ff2-9202-3d5930f121e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\"" Dec 16 12:08:06.146720 kubelet[2713]: E1216 12:08:06.146694 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:06.206916 kubelet[2713]: E1216 12:08:06.206872 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.206916 kubelet[2713]: W1216 12:08:06.206899 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.206916 kubelet[2713]: E1216 12:08:06.206921 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.207182 kubelet[2713]: E1216 12:08:06.207146 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.207182 kubelet[2713]: W1216 12:08:06.207155 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.207182 kubelet[2713]: E1216 12:08:06.207164 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.207336 kubelet[2713]: E1216 12:08:06.207323 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.207336 kubelet[2713]: W1216 12:08:06.207335 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.207410 kubelet[2713]: E1216 12:08:06.207344 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.207497 kubelet[2713]: E1216 12:08:06.207485 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.207497 kubelet[2713]: W1216 12:08:06.207494 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.207550 kubelet[2713]: E1216 12:08:06.207502 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.207704 kubelet[2713]: E1216 12:08:06.207693 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.207704 kubelet[2713]: W1216 12:08:06.207704 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.207763 kubelet[2713]: E1216 12:08:06.207712 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.207901 kubelet[2713]: E1216 12:08:06.207890 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.207933 kubelet[2713]: W1216 12:08:06.207901 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.207933 kubelet[2713]: E1216 12:08:06.207910 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.208120 kubelet[2713]: E1216 12:08:06.208105 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.208120 kubelet[2713]: W1216 12:08:06.208119 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.208193 kubelet[2713]: E1216 12:08:06.208127 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.208295 kubelet[2713]: E1216 12:08:06.208283 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.208295 kubelet[2713]: W1216 12:08:06.208293 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.208349 kubelet[2713]: E1216 12:08:06.208301 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.208540 kubelet[2713]: E1216 12:08:06.208521 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.208574 kubelet[2713]: W1216 12:08:06.208541 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.208574 kubelet[2713]: E1216 12:08:06.208555 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.208704 kubelet[2713]: E1216 12:08:06.208694 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.208704 kubelet[2713]: W1216 12:08:06.208704 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.208757 kubelet[2713]: E1216 12:08:06.208712 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.208845 kubelet[2713]: E1216 12:08:06.208833 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.208845 kubelet[2713]: W1216 12:08:06.208842 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.209094 kubelet[2713]: E1216 12:08:06.208850 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.209218 kubelet[2713]: E1216 12:08:06.209201 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.209291 kubelet[2713]: W1216 12:08:06.209278 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.209350 kubelet[2713]: E1216 12:08:06.209332 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.209599 kubelet[2713]: E1216 12:08:06.209585 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.209772 kubelet[2713]: W1216 12:08:06.209622 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.209772 kubelet[2713]: E1216 12:08:06.209635 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.209897 kubelet[2713]: E1216 12:08:06.209883 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.209968 kubelet[2713]: W1216 12:08:06.209957 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.210094 kubelet[2713]: E1216 12:08:06.210009 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.210414 kubelet[2713]: E1216 12:08:06.210368 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.210414 kubelet[2713]: W1216 12:08:06.210382 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.210585 kubelet[2713]: E1216 12:08:06.210509 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.210798 kubelet[2713]: E1216 12:08:06.210778 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.210871 kubelet[2713]: W1216 12:08:06.210851 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.210937 kubelet[2713]: E1216 12:08:06.210925 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.211280 kubelet[2713]: E1216 12:08:06.211155 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.211280 kubelet[2713]: W1216 12:08:06.211168 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.211280 kubelet[2713]: E1216 12:08:06.211179 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.211495 kubelet[2713]: E1216 12:08:06.211479 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.211592 kubelet[2713]: W1216 12:08:06.211575 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.211663 kubelet[2713]: E1216 12:08:06.211652 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.211961 kubelet[2713]: E1216 12:08:06.211909 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.211961 kubelet[2713]: W1216 12:08:06.211922 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.211961 kubelet[2713]: E1216 12:08:06.211933 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.212318 kubelet[2713]: E1216 12:08:06.212256 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.212318 kubelet[2713]: W1216 12:08:06.212271 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.212318 kubelet[2713]: E1216 12:08:06.212284 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.212705 kubelet[2713]: E1216 12:08:06.212689 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.212796 kubelet[2713]: W1216 12:08:06.212783 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.212953 kubelet[2713]: E1216 12:08:06.212854 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.213149 kubelet[2713]: E1216 12:08:06.213136 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.213241 kubelet[2713]: W1216 12:08:06.213225 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.213293 kubelet[2713]: E1216 12:08:06.213283 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.214619 kubelet[2713]: E1216 12:08:06.214593 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.214619 kubelet[2713]: W1216 12:08:06.214612 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.214698 kubelet[2713]: E1216 12:08:06.214628 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.214997 kubelet[2713]: E1216 12:08:06.214978 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.214997 kubelet[2713]: W1216 12:08:06.214990 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.215091 kubelet[2713]: E1216 12:08:06.215001 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.215259 kubelet[2713]: E1216 12:08:06.215244 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.215259 kubelet[2713]: W1216 12:08:06.215257 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.215310 kubelet[2713]: E1216 12:08:06.215267 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:06.226934 kubelet[2713]: E1216 12:08:06.226894 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:06.226934 kubelet[2713]: W1216 12:08:06.226916 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:06.226934 kubelet[2713]: E1216 12:08:06.226935 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:06.533000 audit[3327]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:06.533000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcad80130 a2=0 a3=1 items=0 ppid=2826 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:06.540000 audit[3327]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:06.540000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcad80130 a2=0 a3=1 items=0 ppid=2826 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:06.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:06.835228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787004449.mount: Deactivated successfully. Dec 16 12:08:07.377920 containerd[1557]: time="2025-12-16T12:08:07.377859864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:07.382059 containerd[1557]: time="2025-12-16T12:08:07.381986146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 12:08:07.385961 containerd[1557]: time="2025-12-16T12:08:07.385871285Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:07.390059 containerd[1557]: time="2025-12-16T12:08:07.389796788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:07.391128 containerd[1557]: time="2025-12-16T12:08:07.391087354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.391143151s" Dec 16 12:08:07.391128 containerd[1557]: time="2025-12-16T12:08:07.391129518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:08:07.394131 containerd[1557]: time="2025-12-16T12:08:07.394075406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:08:07.424192 containerd[1557]: time="2025-12-16T12:08:07.424149379Z" level=info msg="CreateContainer within sandbox \"ce2d3ba542aff10d98198b1dd1342d19095f95c3007ede3742f9ab5117d8a97a\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:08:07.430464 containerd[1557]: time="2025-12-16T12:08:07.430413631Z" level=info msg="Container 1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:08:07.436775 containerd[1557]: time="2025-12-16T12:08:07.436713885Z" level=info msg="CreateContainer within sandbox \"ce2d3ba542aff10d98198b1dd1342d19095f95c3007ede3742f9ab5117d8a97a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2\"" Dec 16 12:08:07.439094 containerd[1557]: time="2025-12-16T12:08:07.439055794Z" level=info msg="StartContainer for \"1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2\"" Dec 16 12:08:07.440478 containerd[1557]: time="2025-12-16T12:08:07.440308876Z" level=info msg="connecting to shim 1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2" address="unix:///run/containerd/s/890aa08333c331bca62002df6839619a6aeb4752dfebb2cdcaf212a848bd9b0e" protocol=ttrpc version=3 Dec 16 12:08:07.466316 systemd[1]: Started cri-containerd-1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2.scope - libcontainer container 1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2. Dec 16 12:08:07.481000 audit: BPF prog-id=159 op=LOAD Dec 16 12:08:07.481000 audit: BPF prog-id=160 op=LOAD Dec 16 12:08:07.481000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.482000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:08:07.482000 audit[3338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.482000 audit: BPF prog-id=161 op=LOAD Dec 16 12:08:07.482000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.482000 audit: BPF prog-id=162 op=LOAD Dec 16 12:08:07.482000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 
a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.482000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:08:07.482000 audit[3338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.482000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:08:07.482000 audit[3338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.482000 audit: BPF prog-id=163 op=LOAD Dec 16 12:08:07.482000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3168 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:07.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303335383437623164643135323338303532336163613763313666 Dec 16 12:08:07.521582 containerd[1557]: time="2025-12-16T12:08:07.521523479Z" level=info msg="StartContainer for \"1e035847b1dd152380523aca7c16f9a66bcbe0401ff168e0cca6b4efdd9d90c2\" returns successfully" Dec 16 12:08:07.553749 kubelet[2713]: E1216 12:08:07.552599 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:07.643993 kubelet[2713]: E1216 12:08:07.643884 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:07.671076 kubelet[2713]: I1216 12:08:07.670699 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-78f4dbfb7-s4wjq" podStartSLOduration=1.276522981 podStartE2EDuration="2.67068067s" podCreationTimestamp="2025-12-16 12:08:05 +0000 UTC" firstStartedPulling="2025-12-16 12:08:05.999615687 +0000 UTC m=+22.616017058" lastFinishedPulling="2025-12-16 12:08:07.393773336 +0000 UTC m=+24.010174747" observedRunningTime="2025-12-16 12:08:07.670343397 +0000 UTC m=+24.286744808" watchObservedRunningTime="2025-12-16 12:08:07.67068067 +0000 UTC m=+24.287082081" Dec 16 12:08:07.700190 kubelet[2713]: E1216 12:08:07.700132 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.700190 kubelet[2713]: W1216 12:08:07.700172 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.700190 kubelet[2713]: E1216 12:08:07.700200 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.701508 kubelet[2713]: E1216 12:08:07.701466 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.701640 kubelet[2713]: W1216 12:08:07.701490 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.701640 kubelet[2713]: E1216 12:08:07.701556 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.701769 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703043 kubelet[2713]: W1216 12:08:07.701781 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.701791 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.702055 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703043 kubelet[2713]: W1216 12:08:07.702065 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.702074 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.702243 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703043 kubelet[2713]: W1216 12:08:07.702251 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.702259 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703043 kubelet[2713]: E1216 12:08:07.702394 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703337 kubelet[2713]: W1216 12:08:07.702401 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703337 kubelet[2713]: E1216 12:08:07.702409 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703337 kubelet[2713]: E1216 12:08:07.702539 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703337 kubelet[2713]: W1216 12:08:07.702556 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703337 kubelet[2713]: E1216 12:08:07.702567 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703337 kubelet[2713]: E1216 12:08:07.702719 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703337 kubelet[2713]: W1216 12:08:07.702728 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703337 kubelet[2713]: E1216 12:08:07.702737 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703337 kubelet[2713]: E1216 12:08:07.702874 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703337 kubelet[2713]: W1216 12:08:07.702881 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703538 kubelet[2713]: E1216 12:08:07.702889 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:07.703538 kubelet[2713]: E1216 12:08:07.703189 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703538 kubelet[2713]: W1216 12:08:07.703200 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703538 kubelet[2713]: E1216 12:08:07.703346 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703672 kubelet[2713]: E1216 12:08:07.703648 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703672 kubelet[2713]: W1216 12:08:07.703663 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703730 kubelet[2713]: E1216 12:08:07.703675 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.703895 kubelet[2713]: E1216 12:08:07.703877 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.703895 kubelet[2713]: W1216 12:08:07.703890 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.703956 kubelet[2713]: E1216 12:08:07.703900 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.704071 kubelet[2713]: E1216 12:08:07.704055 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.704071 kubelet[2713]: W1216 12:08:07.704066 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.704141 kubelet[2713]: E1216 12:08:07.704076 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.704500 kubelet[2713]: E1216 12:08:07.704476 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.704500 kubelet[2713]: W1216 12:08:07.704493 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.704589 kubelet[2713]: E1216 12:08:07.704506 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:07.705188 kubelet[2713]: E1216 12:08:07.705158 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.705188 kubelet[2713]: W1216 12:08:07.705175 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.705188 kubelet[2713]: E1216 12:08:07.705188 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.725434 kubelet[2713]: E1216 12:08:07.725398 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.725434 kubelet[2713]: W1216 12:08:07.725428 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.725607 kubelet[2713]: E1216 12:08:07.725451 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.725690 kubelet[2713]: E1216 12:08:07.725682 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.725751 kubelet[2713]: W1216 12:08:07.725692 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.725751 kubelet[2713]: E1216 12:08:07.725702 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.726340 kubelet[2713]: E1216 12:08:07.726258 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.726340 kubelet[2713]: W1216 12:08:07.726285 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.726340 kubelet[2713]: E1216 12:08:07.726307 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.727254 kubelet[2713]: E1216 12:08:07.727218 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.727254 kubelet[2713]: W1216 12:08:07.727244 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.727374 kubelet[2713]: E1216 12:08:07.727263 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.727605 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.729086 kubelet[2713]: W1216 12:08:07.727625 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.727640 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.728256 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.729086 kubelet[2713]: W1216 12:08:07.728272 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.728298 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.728556 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.729086 kubelet[2713]: W1216 12:08:07.728569 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.728581 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.729086 kubelet[2713]: E1216 12:08:07.728926 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.729363 kubelet[2713]: W1216 12:08:07.728940 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.729363 kubelet[2713]: E1216 12:08:07.728952 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.729501 kubelet[2713]: E1216 12:08:07.729427 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.729501 kubelet[2713]: W1216 12:08:07.729450 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.729501 kubelet[2713]: E1216 12:08:07.729465 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:07.730317 kubelet[2713]: E1216 12:08:07.730289 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.730317 kubelet[2713]: W1216 12:08:07.730308 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.730317 kubelet[2713]: E1216 12:08:07.730324 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.730530 kubelet[2713]: E1216 12:08:07.730512 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.730530 kubelet[2713]: W1216 12:08:07.730528 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.730627 kubelet[2713]: E1216 12:08:07.730537 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.730769 kubelet[2713]: E1216 12:08:07.730747 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.730769 kubelet[2713]: W1216 12:08:07.730762 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.731118 kubelet[2713]: E1216 12:08:07.730772 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.731118 kubelet[2713]: E1216 12:08:07.731077 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.731118 kubelet[2713]: W1216 12:08:07.731088 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.731118 kubelet[2713]: E1216 12:08:07.731100 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.731861 kubelet[2713]: E1216 12:08:07.731821 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.731861 kubelet[2713]: W1216 12:08:07.731838 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.731861 kubelet[2713]: E1216 12:08:07.731853 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:07.732160 kubelet[2713]: E1216 12:08:07.732139 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.732160 kubelet[2713]: W1216 12:08:07.732152 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.732160 kubelet[2713]: E1216 12:08:07.732162 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.732367 kubelet[2713]: E1216 12:08:07.732346 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.732367 kubelet[2713]: W1216 12:08:07.732359 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.732367 kubelet[2713]: E1216 12:08:07.732368 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.733266 kubelet[2713]: E1216 12:08:07.733186 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.733266 kubelet[2713]: W1216 12:08:07.733210 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.733266 kubelet[2713]: E1216 12:08:07.733225 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:08:07.737179 kubelet[2713]: E1216 12:08:07.737136 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:08:07.737179 kubelet[2713]: W1216 12:08:07.737170 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:08:07.737179 kubelet[2713]: E1216 12:08:07.737190 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:08:08.470713 containerd[1557]: time="2025-12-16T12:08:08.470662889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:08.471779 containerd[1557]: time="2025-12-16T12:08:08.471734307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Dec 16 12:08:08.472976 containerd[1557]: time="2025-12-16T12:08:08.472908174Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:08.482784 containerd[1557]: time="2025-12-16T12:08:08.482680028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:08.483478 containerd[1557]: time="2025-12-16T12:08:08.483202796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.089092867s" Dec 16 12:08:08.483478 containerd[1557]: time="2025-12-16T12:08:08.483244360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:08:08.486953 containerd[1557]: time="2025-12-16T12:08:08.486915695Z" level=info msg="CreateContainer within sandbox \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:08:08.500554 containerd[1557]: time="2025-12-16T12:08:08.500507538Z" level=info msg="Container 471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:08:08.503336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195785757.mount: Deactivated successfully. Dec 16 12:08:08.512535 containerd[1557]: time="2025-12-16T12:08:08.512486474Z" level=info msg="CreateContainer within sandbox \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c\"" Dec 16 12:08:08.513192 containerd[1557]: time="2025-12-16T12:08:08.513116132Z" level=info msg="StartContainer for \"471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c\"" Dec 16 12:08:08.515901 containerd[1557]: time="2025-12-16T12:08:08.515866583Z" level=info msg="connecting to shim 471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c" address="unix:///run/containerd/s/c4a7b51421c97d5933e77bdeb532595357f15b70ffa81b4284e3e6ffc2f4938f" protocol=ttrpc version=3 Dec 16 12:08:08.536317 systemd[1]: Started cri-containerd-471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c.scope - libcontainer container 471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c. 
Dec 16 12:08:08.582000 audit: BPF prog-id=164 op=LOAD Dec 16 12:08:08.584827 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 16 12:08:08.584906 kernel: audit: type=1334 audit(1765886888.582:548): prog-id=164 op=LOAD Dec 16 12:08:08.584931 kernel: audit: type=1300 audit(1765886888.582:548): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.592637 kernel: audit: type=1327 audit(1765886888.582:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.582000 audit: BPF prog-id=165 op=LOAD Dec 16 12:08:08.594459 kernel: audit: type=1334 audit(1765886888.582:549): prog-id=165 op=LOAD Dec 16 12:08:08.594657 kernel: audit: type=1300 audit(1765886888.582:549): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.602785 kernel: audit: type=1327 audit(1765886888.582:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.582000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:08:08.604596 kernel: audit: type=1334 audit(1765886888.582:550): prog-id=165 op=UNLOAD Dec 16 12:08:08.604651 kernel: audit: type=1300 audit(1765886888.582:550): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 
audit[3415]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.613418 kernel: audit: type=1327 audit(1765886888.582:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.613601 kernel: audit: type=1334 audit(1765886888.582:551): prog-id=164 op=UNLOAD Dec 16 12:08:08.582000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:08:08.582000 audit[3415]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.582000 audit: BPF prog-id=166 op=LOAD Dec 16 12:08:08.582000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3227 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:08.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437316663393333333935616461616237643131333431383633363462 Dec 16 12:08:08.650112 containerd[1557]: time="2025-12-16T12:08:08.650008252Z" level=info msg="StartContainer for \"471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c\" returns successfully" Dec 16 12:08:08.651719 kubelet[2713]: I1216 12:08:08.651677 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:08:08.652435 kubelet[2713]: E1216 12:08:08.652235 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:08.663830 systemd[1]: cri-containerd-471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c.scope: Deactivated successfully. 
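The audit PROCTITLE records above carry the command line of the audited process as a single hex string with NUL-separated arguments. A minimal decoding sketch (the function name is illustrative):

    # Decode an audit PROCTITLE hex string into its NUL-separated argument list.
    def decode_proctitle(hexstr: str) -> list[str]:
        return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

    # The records above decode to the runc invocation for the flexvol-driver task, roughly:
    #   runc --root /run/containerd/runc/k8s.io
    #        --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/471fc933395adaab7d1134186364b...
    # (the tail of the container id is cut off because the kernel truncates long proctitles)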
Dec 16 12:08:08.667000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:08:08.679651 containerd[1557]: time="2025-12-16T12:08:08.679551954Z" level=info msg="received container exit event container_id:\"471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c\" id:\"471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c\" pid:3428 exited_at:{seconds:1765886888 nanos:673947601}" Dec 16 12:08:08.717652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471fc933395adaab7d1134186364bf3a60c5e7e3aa35c406e36237e20918909c-rootfs.mount: Deactivated successfully. Dec 16 12:08:09.552724 kubelet[2713]: E1216 12:08:09.552619 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:09.662684 kubelet[2713]: E1216 12:08:09.662636 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:09.664088 containerd[1557]: time="2025-12-16T12:08:09.664045204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:08:11.352222 containerd[1557]: time="2025-12-16T12:08:11.352152460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:11.352754 containerd[1557]: time="2025-12-16T12:08:11.352704142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:08:11.353591 containerd[1557]: time="2025-12-16T12:08:11.353532764Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:11.355540 containerd[1557]: time="2025-12-16T12:08:11.355505233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:11.356352 containerd[1557]: time="2025-12-16T12:08:11.356253409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.692166162s" Dec 16 12:08:11.356352 containerd[1557]: time="2025-12-16T12:08:11.356283251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:08:11.361004 containerd[1557]: time="2025-12-16T12:08:11.360524091Z" level=info msg="CreateContainer within sandbox \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:08:11.369202 containerd[1557]: time="2025-12-16T12:08:11.369155982Z" level=info msg="Container 7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:08:11.381404 containerd[1557]: time="2025-12-16T12:08:11.381280575Z" level=info 
msg="CreateContainer within sandbox \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e\"" Dec 16 12:08:11.381990 containerd[1557]: time="2025-12-16T12:08:11.381964187Z" level=info msg="StartContainer for \"7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e\"" Dec 16 12:08:11.383526 containerd[1557]: time="2025-12-16T12:08:11.383493582Z" level=info msg="connecting to shim 7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e" address="unix:///run/containerd/s/c4a7b51421c97d5933e77bdeb532595357f15b70ffa81b4284e3e6ffc2f4938f" protocol=ttrpc version=3 Dec 16 12:08:11.405283 systemd[1]: Started cri-containerd-7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e.scope - libcontainer container 7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e. Dec 16 12:08:11.478000 audit: BPF prog-id=167 op=LOAD Dec 16 12:08:11.478000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3227 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:11.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303163373730656465303966616433306236376135613631656265 Dec 16 12:08:11.478000 audit: BPF prog-id=168 op=LOAD Dec 16 12:08:11.478000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=3227 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:11.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303163373730656465303966616433306236376135613631656265 Dec 16 12:08:11.478000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:08:11.478000 audit[3475]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:11.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303163373730656465303966616433306236376135613631656265 Dec 16 12:08:11.478000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:08:11.478000 audit[3475]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:11.478000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303163373730656465303966616433306236376135613631656265 Dec 16 12:08:11.478000 audit: BPF prog-id=169 op=LOAD Dec 16 12:08:11.478000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=3227 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:11.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303163373730656465303966616433306236376135613631656265 Dec 16 12:08:11.498972 containerd[1557]: time="2025-12-16T12:08:11.498918761Z" level=info msg="StartContainer for \"7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e\" returns successfully" Dec 16 12:08:11.552787 kubelet[2713]: E1216 12:08:11.552667 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:11.670009 kubelet[2713]: E1216 12:08:11.669881 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:12.130146 systemd[1]: cri-containerd-7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e.scope: Deactivated successfully. Dec 16 12:08:12.130572 systemd[1]: cri-containerd-7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e.scope: Consumed 491ms CPU time, 175M memory peak, 5.1M read from disk, 165.9M written to disk. Dec 16 12:08:12.136000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:08:12.142543 containerd[1557]: time="2025-12-16T12:08:12.142492874Z" level=info msg="received container exit event container_id:\"7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e\" id:\"7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e\" pid:3488 exited_at:{seconds:1765886892 nanos:142254618}" Dec 16 12:08:12.176388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7001c770ede09fad30b67a5a61ebe59e169fa942fce0e1fd496720eff50e488e-rootfs.mount: Deactivated successfully. Dec 16 12:08:12.205601 kubelet[2713]: I1216 12:08:12.205566 2713 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:08:12.263179 systemd[1]: Created slice kubepods-burstable-pod98d1e4ea_915a_4b83_837d_93550126bdaf.slice - libcontainer container kubepods-burstable-pod98d1e4ea_915a_4b83_837d_93550126bdaf.slice. 
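The container exit events above express exited_at as POSIX seconds plus nanoseconds (e.g. seconds:1765886892 nanos:142254618). A minimal sketch for converting that pair back into the RFC 3339 UTC form used by the surrounding containerd timestamps (function name illustrative):

    # Convert an exited_at {seconds, nanos} pair into an RFC 3339 UTC timestamp.
    from datetime import datetime, timezone

    def exited_at_to_rfc3339(seconds: int, nanos: int) -> str:
        ts = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
        return ts.isoformat().replace("+00:00", "Z")

    # Prints 2025-12-16T12:08:12.142254Z -- consistent with the 12:08:12.142 journal time of the exit event
    print(exited_at_to_rfc3339(1765886892, 142254618))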
Dec 16 12:08:12.267257 kubelet[2713]: I1216 12:08:12.267200 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98d1e4ea-915a-4b83-837d-93550126bdaf-config-volume\") pod \"coredns-674b8bbfcf-2spcj\" (UID: \"98d1e4ea-915a-4b83-837d-93550126bdaf\") " pod="kube-system/coredns-674b8bbfcf-2spcj" Dec 16 12:08:12.267257 kubelet[2713]: I1216 12:08:12.267250 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cce57e8e-9a64-42b6-8b26-7e3f06d75548-config\") pod \"goldmane-666569f655-2rr6k\" (UID: \"cce57e8e-9a64-42b6-8b26-7e3f06d75548\") " pod="calico-system/goldmane-666569f655-2rr6k" Dec 16 12:08:12.267409 kubelet[2713]: I1216 12:08:12.267270 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bhp8\" (UniqueName: \"kubernetes.io/projected/98d1e4ea-915a-4b83-837d-93550126bdaf-kube-api-access-7bhp8\") pod \"coredns-674b8bbfcf-2spcj\" (UID: \"98d1e4ea-915a-4b83-837d-93550126bdaf\") " pod="kube-system/coredns-674b8bbfcf-2spcj" Dec 16 12:08:12.267409 kubelet[2713]: I1216 12:08:12.267289 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cce57e8e-9a64-42b6-8b26-7e3f06d75548-goldmane-ca-bundle\") pod \"goldmane-666569f655-2rr6k\" (UID: \"cce57e8e-9a64-42b6-8b26-7e3f06d75548\") " pod="calico-system/goldmane-666569f655-2rr6k" Dec 16 12:08:12.267409 kubelet[2713]: I1216 12:08:12.267306 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc2d4454-3b99-4f0b-9a73-a20874110667-config-volume\") pod \"coredns-674b8bbfcf-fp6fd\" (UID: \"cc2d4454-3b99-4f0b-9a73-a20874110667\") " pod="kube-system/coredns-674b8bbfcf-fp6fd" Dec 16 12:08:12.267409 kubelet[2713]: I1216 12:08:12.267325 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nvz\" (UniqueName: \"kubernetes.io/projected/cc2d4454-3b99-4f0b-9a73-a20874110667-kube-api-access-67nvz\") pod \"coredns-674b8bbfcf-fp6fd\" (UID: \"cc2d4454-3b99-4f0b-9a73-a20874110667\") " pod="kube-system/coredns-674b8bbfcf-fp6fd" Dec 16 12:08:12.267409 kubelet[2713]: I1216 12:08:12.267343 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94n8v\" (UniqueName: \"kubernetes.io/projected/35f02a41-1704-43ba-9b8e-82676bfbc14d-kube-api-access-94n8v\") pod \"calico-apiserver-7f5b89cfdd-g5j9p\" (UID: \"35f02a41-1704-43ba-9b8e-82676bfbc14d\") " pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" Dec 16 12:08:12.267533 kubelet[2713]: I1216 12:08:12.267360 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cce57e8e-9a64-42b6-8b26-7e3f06d75548-goldmane-key-pair\") pod \"goldmane-666569f655-2rr6k\" (UID: \"cce57e8e-9a64-42b6-8b26-7e3f06d75548\") " pod="calico-system/goldmane-666569f655-2rr6k" Dec 16 12:08:12.267533 kubelet[2713]: I1216 12:08:12.267379 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-backend-key-pair\") pod 
\"whisker-dccb4d5cc-pslrb\" (UID: \"a61db773-a925-451f-85e2-1020f71a4c7e\") " pod="calico-system/whisker-dccb4d5cc-pslrb" Dec 16 12:08:12.267533 kubelet[2713]: I1216 12:08:12.267396 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-ca-bundle\") pod \"whisker-dccb4d5cc-pslrb\" (UID: \"a61db773-a925-451f-85e2-1020f71a4c7e\") " pod="calico-system/whisker-dccb4d5cc-pslrb" Dec 16 12:08:12.267533 kubelet[2713]: I1216 12:08:12.267412 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45zm\" (UniqueName: \"kubernetes.io/projected/a61db773-a925-451f-85e2-1020f71a4c7e-kube-api-access-z45zm\") pod \"whisker-dccb4d5cc-pslrb\" (UID: \"a61db773-a925-451f-85e2-1020f71a4c7e\") " pod="calico-system/whisker-dccb4d5cc-pslrb" Dec 16 12:08:12.267533 kubelet[2713]: I1216 12:08:12.267430 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crn5q\" (UniqueName: \"kubernetes.io/projected/cce57e8e-9a64-42b6-8b26-7e3f06d75548-kube-api-access-crn5q\") pod \"goldmane-666569f655-2rr6k\" (UID: \"cce57e8e-9a64-42b6-8b26-7e3f06d75548\") " pod="calico-system/goldmane-666569f655-2rr6k" Dec 16 12:08:12.267692 kubelet[2713]: I1216 12:08:12.267446 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35f02a41-1704-43ba-9b8e-82676bfbc14d-calico-apiserver-certs\") pod \"calico-apiserver-7f5b89cfdd-g5j9p\" (UID: \"35f02a41-1704-43ba-9b8e-82676bfbc14d\") " pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" Dec 16 12:08:12.272607 systemd[1]: Created slice kubepods-besteffort-pod35f02a41_1704_43ba_9b8e_82676bfbc14d.slice - libcontainer container kubepods-besteffort-pod35f02a41_1704_43ba_9b8e_82676bfbc14d.slice. Dec 16 12:08:12.282384 systemd[1]: Created slice kubepods-besteffort-podcce57e8e_9a64_42b6_8b26_7e3f06d75548.slice - libcontainer container kubepods-besteffort-podcce57e8e_9a64_42b6_8b26_7e3f06d75548.slice. Dec 16 12:08:12.288879 systemd[1]: Created slice kubepods-burstable-podcc2d4454_3b99_4f0b_9a73_a20874110667.slice - libcontainer container kubepods-burstable-podcc2d4454_3b99_4f0b_9a73_a20874110667.slice. Dec 16 12:08:12.295041 systemd[1]: Created slice kubepods-besteffort-poda61db773_a925_451f_85e2_1020f71a4c7e.slice - libcontainer container kubepods-besteffort-poda61db773_a925_451f_85e2_1020f71a4c7e.slice. Dec 16 12:08:12.302394 kubelet[2713]: I1216 12:08:12.302350 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:08:12.303660 kubelet[2713]: E1216 12:08:12.303612 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:12.304691 systemd[1]: Created slice kubepods-besteffort-podbc6dd9ec_9c70_48ca_8951_46abf4919f50.slice - libcontainer container kubepods-besteffort-podbc6dd9ec_9c70_48ca_8951_46abf4919f50.slice. Dec 16 12:08:12.311221 systemd[1]: Created slice kubepods-besteffort-pod0ff6ef5a_c884_4b41_929a_344b8f5bf998.slice - libcontainer container kubepods-besteffort-pod0ff6ef5a_c884_4b41_929a_344b8f5bf998.slice. 
Dec 16 12:08:12.368496 kubelet[2713]: I1216 12:08:12.368443 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc6dd9ec-9c70-48ca-8951-46abf4919f50-tigera-ca-bundle\") pod \"calico-kube-controllers-5d6dbc8bbd-fps4t\" (UID: \"bc6dd9ec-9c70-48ca-8951-46abf4919f50\") " pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" Dec 16 12:08:12.368496 kubelet[2713]: I1216 12:08:12.368487 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ff6ef5a-c884-4b41-929a-344b8f5bf998-calico-apiserver-certs\") pod \"calico-apiserver-7f5b89cfdd-jgs6w\" (UID: \"0ff6ef5a-c884-4b41-929a-344b8f5bf998\") " pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" Dec 16 12:08:12.368496 kubelet[2713]: I1216 12:08:12.368507 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7zd\" (UniqueName: \"kubernetes.io/projected/0ff6ef5a-c884-4b41-929a-344b8f5bf998-kube-api-access-rl7zd\") pod \"calico-apiserver-7f5b89cfdd-jgs6w\" (UID: \"0ff6ef5a-c884-4b41-929a-344b8f5bf998\") " pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" Dec 16 12:08:12.368678 kubelet[2713]: I1216 12:08:12.368550 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnh8g\" (UniqueName: \"kubernetes.io/projected/bc6dd9ec-9c70-48ca-8951-46abf4919f50-kube-api-access-pnh8g\") pod \"calico-kube-controllers-5d6dbc8bbd-fps4t\" (UID: \"bc6dd9ec-9c70-48ca-8951-46abf4919f50\") " pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" Dec 16 12:08:12.413000 audit[3528]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:12.413000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffddfdaf00 a2=0 a3=1 items=0 ppid=2826 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:12.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:12.421000 audit[3528]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:12.421000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffddfdaf00 a2=0 a3=1 items=0 ppid=2826 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:12.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:12.569866 kubelet[2713]: E1216 12:08:12.569824 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:12.570674 containerd[1557]: time="2025-12-16T12:08:12.570624602Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-2spcj,Uid:98d1e4ea-915a-4b83-837d-93550126bdaf,Namespace:kube-system,Attempt:0,}" Dec 16 12:08:12.578417 containerd[1557]: time="2025-12-16T12:08:12.578373470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-g5j9p,Uid:35f02a41-1704-43ba-9b8e-82676bfbc14d,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:08:12.586201 containerd[1557]: time="2025-12-16T12:08:12.586061373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2rr6k,Uid:cce57e8e-9a64-42b6-8b26-7e3f06d75548,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:12.592061 kubelet[2713]: E1216 12:08:12.592001 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:12.593122 containerd[1557]: time="2025-12-16T12:08:12.592670360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fp6fd,Uid:cc2d4454-3b99-4f0b-9a73-a20874110667,Namespace:kube-system,Attempt:0,}" Dec 16 12:08:12.601169 containerd[1557]: time="2025-12-16T12:08:12.600969666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dccb4d5cc-pslrb,Uid:a61db773-a925-451f-85e2-1020f71a4c7e,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:12.610138 containerd[1557]: time="2025-12-16T12:08:12.610070669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6dbc8bbd-fps4t,Uid:bc6dd9ec-9c70-48ca-8951-46abf4919f50,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:12.615793 containerd[1557]: time="2025-12-16T12:08:12.615749350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-jgs6w,Uid:0ff6ef5a-c884-4b41-929a-344b8f5bf998,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:08:12.682869 kubelet[2713]: E1216 12:08:12.682758 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:12.683999 kubelet[2713]: E1216 12:08:12.683622 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:12.685288 containerd[1557]: time="2025-12-16T12:08:12.685246581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:08:12.721751 containerd[1557]: time="2025-12-16T12:08:12.721634311Z" level=error msg="Failed to destroy network for sandbox \"4807a9df64bd4456ba47abc7c9014b43fc8319b7cd819691465a5acdd2c15cea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.724568 containerd[1557]: time="2025-12-16T12:08:12.724508274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2rr6k,Uid:cce57e8e-9a64-42b6-8b26-7e3f06d75548,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4807a9df64bd4456ba47abc7c9014b43fc8319b7cd819691465a5acdd2c15cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.733675 kubelet[2713]: E1216 12:08:12.733596 2713 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4807a9df64bd4456ba47abc7c9014b43fc8319b7cd819691465a5acdd2c15cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.733816 kubelet[2713]: E1216 12:08:12.733700 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4807a9df64bd4456ba47abc7c9014b43fc8319b7cd819691465a5acdd2c15cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2rr6k" Dec 16 12:08:12.733816 kubelet[2713]: E1216 12:08:12.733722 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4807a9df64bd4456ba47abc7c9014b43fc8319b7cd819691465a5acdd2c15cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2rr6k" Dec 16 12:08:12.733816 kubelet[2713]: E1216 12:08:12.733783 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2rr6k_calico-system(cce57e8e-9a64-42b6-8b26-7e3f06d75548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2rr6k_calico-system(cce57e8e-9a64-42b6-8b26-7e3f06d75548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4807a9df64bd4456ba47abc7c9014b43fc8319b7cd819691465a5acdd2c15cea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:08:12.742505 containerd[1557]: time="2025-12-16T12:08:12.742436781Z" level=error msg="Failed to destroy network for sandbox \"8ac1028d9633f3b83458c8753a4ebab3f5aca44c0f3d1b926150f9b988af739d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.745882 containerd[1557]: time="2025-12-16T12:08:12.745821500Z" level=error msg="Failed to destroy network for sandbox \"5867d37bca5391936ace41718f3d3ae9272083483c1c8ca48fcf0c2370a1624e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.746612 containerd[1557]: time="2025-12-16T12:08:12.746574713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fp6fd,Uid:cc2d4454-3b99-4f0b-9a73-a20874110667,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac1028d9633f3b83458c8753a4ebab3f5aca44c0f3d1b926150f9b988af739d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.747151 kubelet[2713]: E1216 12:08:12.747000 2713 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac1028d9633f3b83458c8753a4ebab3f5aca44c0f3d1b926150f9b988af739d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.747265 kubelet[2713]: E1216 12:08:12.747200 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac1028d9633f3b83458c8753a4ebab3f5aca44c0f3d1b926150f9b988af739d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fp6fd" Dec 16 12:08:12.747265 kubelet[2713]: E1216 12:08:12.747228 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac1028d9633f3b83458c8753a4ebab3f5aca44c0f3d1b926150f9b988af739d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fp6fd" Dec 16 12:08:12.747501 kubelet[2713]: E1216 12:08:12.747309 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fp6fd_kube-system(cc2d4454-3b99-4f0b-9a73-a20874110667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fp6fd_kube-system(cc2d4454-3b99-4f0b-9a73-a20874110667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ac1028d9633f3b83458c8753a4ebab3f5aca44c0f3d1b926150f9b988af739d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fp6fd" podUID="cc2d4454-3b99-4f0b-9a73-a20874110667" Dec 16 12:08:12.748987 containerd[1557]: time="2025-12-16T12:08:12.748913599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2spcj,Uid:98d1e4ea-915a-4b83-837d-93550126bdaf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867d37bca5391936ace41718f3d3ae9272083483c1c8ca48fcf0c2370a1624e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.749592 kubelet[2713]: E1216 12:08:12.749182 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867d37bca5391936ace41718f3d3ae9272083483c1c8ca48fcf0c2370a1624e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.749592 kubelet[2713]: E1216 12:08:12.749235 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867d37bca5391936ace41718f3d3ae9272083483c1c8ca48fcf0c2370a1624e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2spcj" Dec 16 12:08:12.749592 kubelet[2713]: E1216 12:08:12.749255 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867d37bca5391936ace41718f3d3ae9272083483c1c8ca48fcf0c2370a1624e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2spcj" Dec 16 12:08:12.749774 kubelet[2713]: E1216 12:08:12.749313 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2spcj_kube-system(98d1e4ea-915a-4b83-837d-93550126bdaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2spcj_kube-system(98d1e4ea-915a-4b83-837d-93550126bdaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5867d37bca5391936ace41718f3d3ae9272083483c1c8ca48fcf0c2370a1624e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2spcj" podUID="98d1e4ea-915a-4b83-837d-93550126bdaf" Dec 16 12:08:12.753260 containerd[1557]: time="2025-12-16T12:08:12.753158379Z" level=error msg="Failed to destroy network for sandbox \"26558f9dc4c272f3e4996a1dfda6ef76bb777a7a15c85fb1b480608a5bf1b608\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.754258 containerd[1557]: time="2025-12-16T12:08:12.754099365Z" level=error msg="Failed to destroy network for sandbox \"66588ad59c4955aeb56824056343dd2c4c26942ed2529c3b9266f92fc6e6afc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.755761 containerd[1557]: time="2025-12-16T12:08:12.755720600Z" level=error msg="Failed to destroy network for sandbox \"d3102c738bcbb392a274d99059c7079fac7075aeebb2ecc37a6500798c70a4ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.755876 containerd[1557]: time="2025-12-16T12:08:12.755741841Z" level=error msg="Failed to destroy network for sandbox \"ef89ae37951acf65890cc5891d29d21ed85683867c04e5d87154ed59e0baa89b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.757513 containerd[1557]: time="2025-12-16T12:08:12.757447282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-g5j9p,Uid:35f02a41-1704-43ba-9b8e-82676bfbc14d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26558f9dc4c272f3e4996a1dfda6ef76bb777a7a15c85fb1b480608a5bf1b608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
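Every RunPodSandbox attempt above fails with the same calico error: /var/lib/calico/nodename does not exist because the calico-node container has not started yet, so the CNI plugin cannot set up any pod network. The message itself names the fix, and these failures should clear once the calico-node container started further down (12:08:16) has written that file. A minimal sketch for grouping the repeated failures by pod from a kubelet journal dump (script and regex are illustrative):

    # Count the repeated calico "nodename" sandbox failures per pod from kubelet journal lines.
    import re
    import sys
    from collections import Counter

    FAIL = re.compile(r'stat /var/lib/calico/nodename: no such file or directory.*?pod="(?P<pod>[^"]+)"')

    failures = Counter()
    for line in sys.stdin:
        m = FAIL.search(line)
        if m:
            failures[m["pod"]] += 1

    for pod, count in failures.most_common():
        print(f"{count:3d}  {pod}")   # e.g. calico-system/goldmane-666569f655-2rr6k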
Dec 16 12:08:12.757976 kubelet[2713]: E1216 12:08:12.757804 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26558f9dc4c272f3e4996a1dfda6ef76bb777a7a15c85fb1b480608a5bf1b608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.758142 kubelet[2713]: E1216 12:08:12.758116 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26558f9dc4c272f3e4996a1dfda6ef76bb777a7a15c85fb1b480608a5bf1b608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" Dec 16 12:08:12.758142 kubelet[2713]: E1216 12:08:12.758173 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26558f9dc4c272f3e4996a1dfda6ef76bb777a7a15c85fb1b480608a5bf1b608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" Dec 16 12:08:12.758421 kubelet[2713]: E1216 12:08:12.758390 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5b89cfdd-g5j9p_calico-apiserver(35f02a41-1704-43ba-9b8e-82676bfbc14d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5b89cfdd-g5j9p_calico-apiserver(35f02a41-1704-43ba-9b8e-82676bfbc14d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26558f9dc4c272f3e4996a1dfda6ef76bb777a7a15c85fb1b480608a5bf1b608\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d" Dec 16 12:08:12.760393 containerd[1557]: time="2025-12-16T12:08:12.760328085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dccb4d5cc-pslrb,Uid:a61db773-a925-451f-85e2-1020f71a4c7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66588ad59c4955aeb56824056343dd2c4c26942ed2529c3b9266f92fc6e6afc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.760761 kubelet[2713]: E1216 12:08:12.760694 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66588ad59c4955aeb56824056343dd2c4c26942ed2529c3b9266f92fc6e6afc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.761151 kubelet[2713]: E1216 12:08:12.760784 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"66588ad59c4955aeb56824056343dd2c4c26942ed2529c3b9266f92fc6e6afc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dccb4d5cc-pslrb" Dec 16 12:08:12.761151 kubelet[2713]: E1216 12:08:12.760837 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66588ad59c4955aeb56824056343dd2c4c26942ed2529c3b9266f92fc6e6afc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dccb4d5cc-pslrb" Dec 16 12:08:12.761151 kubelet[2713]: E1216 12:08:12.760892 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dccb4d5cc-pslrb_calico-system(a61db773-a925-451f-85e2-1020f71a4c7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dccb4d5cc-pslrb_calico-system(a61db773-a925-451f-85e2-1020f71a4c7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66588ad59c4955aeb56824056343dd2c4c26942ed2529c3b9266f92fc6e6afc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dccb4d5cc-pslrb" podUID="a61db773-a925-451f-85e2-1020f71a4c7e" Dec 16 12:08:12.761979 containerd[1557]: time="2025-12-16T12:08:12.761756066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-jgs6w,Uid:0ff6ef5a-c884-4b41-929a-344b8f5bf998,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3102c738bcbb392a274d99059c7079fac7075aeebb2ecc37a6500798c70a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.762418 kubelet[2713]: E1216 12:08:12.762370 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3102c738bcbb392a274d99059c7079fac7075aeebb2ecc37a6500798c70a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.762495 kubelet[2713]: E1216 12:08:12.762429 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3102c738bcbb392a274d99059c7079fac7075aeebb2ecc37a6500798c70a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" Dec 16 12:08:12.762495 kubelet[2713]: E1216 12:08:12.762451 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3102c738bcbb392a274d99059c7079fac7075aeebb2ecc37a6500798c70a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" Dec 16 12:08:12.762545 kubelet[2713]: E1216 12:08:12.762491 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5b89cfdd-jgs6w_calico-apiserver(0ff6ef5a-c884-4b41-929a-344b8f5bf998)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5b89cfdd-jgs6w_calico-apiserver(0ff6ef5a-c884-4b41-929a-344b8f5bf998)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3102c738bcbb392a274d99059c7079fac7075aeebb2ecc37a6500798c70a4ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998" Dec 16 12:08:12.762874 containerd[1557]: time="2025-12-16T12:08:12.762820661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6dbc8bbd-fps4t,Uid:bc6dd9ec-9c70-48ca-8951-46abf4919f50,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef89ae37951acf65890cc5891d29d21ed85683867c04e5d87154ed59e0baa89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.764058 kubelet[2713]: E1216 12:08:12.762992 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef89ae37951acf65890cc5891d29d21ed85683867c04e5d87154ed59e0baa89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:12.764058 kubelet[2713]: E1216 12:08:12.763143 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef89ae37951acf65890cc5891d29d21ed85683867c04e5d87154ed59e0baa89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" Dec 16 12:08:12.764170 kubelet[2713]: E1216 12:08:12.764081 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef89ae37951acf65890cc5891d29d21ed85683867c04e5d87154ed59e0baa89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" Dec 16 12:08:12.764197 kubelet[2713]: E1216 12:08:12.764162 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d6dbc8bbd-fps4t_calico-system(bc6dd9ec-9c70-48ca-8951-46abf4919f50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d6dbc8bbd-fps4t_calico-system(bc6dd9ec-9c70-48ca-8951-46abf4919f50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef89ae37951acf65890cc5891d29d21ed85683867c04e5d87154ed59e0baa89b\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50" Dec 16 12:08:13.562349 systemd[1]: Created slice kubepods-besteffort-pod8b37b908_ae1e_416d_92b4_0b4c5064435f.slice - libcontainer container kubepods-besteffort-pod8b37b908_ae1e_416d_92b4_0b4c5064435f.slice. Dec 16 12:08:13.568365 containerd[1557]: time="2025-12-16T12:08:13.568326666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fx4jj,Uid:8b37b908-ae1e-416d-92b4-0b4c5064435f,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:13.655938 containerd[1557]: time="2025-12-16T12:08:13.655720775Z" level=error msg="Failed to destroy network for sandbox \"782b0244efdd40d16f2cf29190eb6b7a0614d167987b2363e5d4e2b88cfbb3af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:13.657721 systemd[1]: run-netns-cni\x2d70d68dc1\x2d9e36\x2da23a\x2dd8ce\x2dd1e0b5b55fe9.mount: Deactivated successfully. Dec 16 12:08:13.665533 containerd[1557]: time="2025-12-16T12:08:13.665480541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fx4jj,Uid:8b37b908-ae1e-416d-92b4-0b4c5064435f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"782b0244efdd40d16f2cf29190eb6b7a0614d167987b2363e5d4e2b88cfbb3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:13.665740 kubelet[2713]: E1216 12:08:13.665705 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782b0244efdd40d16f2cf29190eb6b7a0614d167987b2363e5d4e2b88cfbb3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:08:13.666278 kubelet[2713]: E1216 12:08:13.665757 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782b0244efdd40d16f2cf29190eb6b7a0614d167987b2363e5d4e2b88cfbb3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:13.666278 kubelet[2713]: E1216 12:08:13.665782 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782b0244efdd40d16f2cf29190eb6b7a0614d167987b2363e5d4e2b88cfbb3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fx4jj" Dec 16 12:08:13.666278 kubelet[2713]: E1216 12:08:13.665826 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"782b0244efdd40d16f2cf29190eb6b7a0614d167987b2363e5d4e2b88cfbb3af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:15.700302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476379141.mount: Deactivated successfully. Dec 16 12:08:15.962156 containerd[1557]: time="2025-12-16T12:08:15.961806354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:15.962796 containerd[1557]: time="2025-12-16T12:08:15.962742809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:08:15.966453 containerd[1557]: time="2025-12-16T12:08:15.966366180Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:15.968504 containerd[1557]: time="2025-12-16T12:08:15.968446021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:08:15.969089 containerd[1557]: time="2025-12-16T12:08:15.969063177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.283777113s" Dec 16 12:08:15.969136 containerd[1557]: time="2025-12-16T12:08:15.969093619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:08:15.985732 containerd[1557]: time="2025-12-16T12:08:15.985673104Z" level=info msg="CreateContainer within sandbox \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:08:16.004070 containerd[1557]: time="2025-12-16T12:08:16.003399447Z" level=info msg="Container ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:08:16.014498 containerd[1557]: time="2025-12-16T12:08:16.014455091Z" level=info msg="CreateContainer within sandbox \"459f295847632c8522898eb942987db02066d6abb1e837d99432e87ecd06085e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950\"" Dec 16 12:08:16.015419 containerd[1557]: time="2025-12-16T12:08:16.015344659Z" level=info msg="StartContainer for \"ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950\"" Dec 16 12:08:16.016883 containerd[1557]: time="2025-12-16T12:08:16.016853421Z" level=info msg="connecting to shim ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950" address="unix:///run/containerd/s/c4a7b51421c97d5933e77bdeb532595357f15b70ffa81b4284e3e6ffc2f4938f" protocol=ttrpc 
version=3 Dec 16 12:08:16.043255 systemd[1]: Started cri-containerd-ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950.scope - libcontainer container ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950. Dec 16 12:08:16.111432 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 12:08:16.111543 kernel: audit: type=1334 audit(1765886896.108:562): prog-id=170 op=LOAD Dec 16 12:08:16.108000 audit: BPF prog-id=170 op=LOAD Dec 16 12:08:16.108000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.115782 kernel: audit: type=1300 audit(1765886896.108:562): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.120119 kernel: audit: type=1327 audit(1765886896.108:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.108000 audit: BPF prog-id=171 op=LOAD Dec 16 12:08:16.121256 kernel: audit: type=1334 audit(1765886896.108:563): prog-id=171 op=LOAD Dec 16 12:08:16.108000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.128749 kernel: audit: type=1300 audit(1765886896.108:563): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.128856 kernel: audit: type=1327 audit(1765886896.108:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.109000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:08:16.133584 kernel: audit: type=1334 audit(1765886896.109:564): prog-id=171 op=UNLOAD Dec 16 12:08:16.133639 kernel: audit: type=1300 audit(1765886896.109:564): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.109000 audit[3800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.139289 containerd[1557]: time="2025-12-16T12:08:16.139248381Z" level=info msg="StartContainer for \"ae2bc60ca5aec98c7a7d9132fb345519a26e89d3371e208cdefe724ee067a950\" returns successfully" Dec 16 12:08:16.141216 kernel: audit: type=1327 audit(1765886896.109:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.109000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:08:16.142345 kernel: audit: type=1334 audit(1765886896.109:565): prog-id=170 op=UNLOAD Dec 16 12:08:16.109000 audit[3800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.109000 audit: BPF prog-id=172 op=LOAD Dec 16 12:08:16.109000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3227 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:16.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326263363063613561656339386337613764393133326662333435 Dec 16 12:08:16.262872 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:08:16.262983 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
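The audit records above carry runc's command line as a hex-encoded PROCTITLE field, with argv elements separated by NUL bytes. The following is a minimal sketch of decoding that field back into a readable command line; it is an illustration added alongside the log, not part of the captured system, and `decodeProctitle` is a name introduced here for that purpose.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE hex string back into a readable
// command line: the kernel hex-encodes argv with NUL bytes between arguments.
func decodeProctitle(s string) (string, error) {
	raw, err := hex.DecodeString(s)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// Leading portion of the proctitle value from the audit events above
	// (the full value is truncated in the log).
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
}
```

Applied to the full (truncated) values above, the same decoding yields the runc shim invocation for the ae2bc60c… task directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/.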
Dec 16 12:08:16.391046 kubelet[2713]: I1216 12:08:16.390551 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z45zm\" (UniqueName: \"kubernetes.io/projected/a61db773-a925-451f-85e2-1020f71a4c7e-kube-api-access-z45zm\") pod \"a61db773-a925-451f-85e2-1020f71a4c7e\" (UID: \"a61db773-a925-451f-85e2-1020f71a4c7e\") " Dec 16 12:08:16.391046 kubelet[2713]: I1216 12:08:16.390605 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-backend-key-pair\") pod \"a61db773-a925-451f-85e2-1020f71a4c7e\" (UID: \"a61db773-a925-451f-85e2-1020f71a4c7e\") " Dec 16 12:08:16.391046 kubelet[2713]: I1216 12:08:16.390634 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-ca-bundle\") pod \"a61db773-a925-451f-85e2-1020f71a4c7e\" (UID: \"a61db773-a925-451f-85e2-1020f71a4c7e\") " Dec 16 12:08:16.404071 kubelet[2713]: I1216 12:08:16.403786 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a61db773-a925-451f-85e2-1020f71a4c7e" (UID: "a61db773-a925-451f-85e2-1020f71a4c7e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:08:16.410975 kubelet[2713]: I1216 12:08:16.410891 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61db773-a925-451f-85e2-1020f71a4c7e-kube-api-access-z45zm" (OuterVolumeSpecName: "kube-api-access-z45zm") pod "a61db773-a925-451f-85e2-1020f71a4c7e" (UID: "a61db773-a925-451f-85e2-1020f71a4c7e"). InnerVolumeSpecName "kube-api-access-z45zm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:08:16.414679 kubelet[2713]: I1216 12:08:16.414531 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a61db773-a925-451f-85e2-1020f71a4c7e" (UID: "a61db773-a925-451f-85e2-1020f71a4c7e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:08:16.491478 kubelet[2713]: I1216 12:08:16.491431 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:08:16.491478 kubelet[2713]: I1216 12:08:16.491480 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z45zm\" (UniqueName: \"kubernetes.io/projected/a61db773-a925-451f-85e2-1020f71a4c7e-kube-api-access-z45zm\") on node \"localhost\" DevicePath \"\"" Dec 16 12:08:16.491478 kubelet[2713]: I1216 12:08:16.491493 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a61db773-a925-451f-85e2-1020f71a4c7e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:08:16.701091 systemd[1]: var-lib-kubelet-pods-a61db773\x2da925\x2d451f\x2d85e2\x2d1020f71a4c7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz45zm.mount: Deactivated successfully. 
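The mount unit names in the entries above are systemd's escaped form of the kubelet volume paths: the leading '/' is dropped, remaining '/' become '-', and most other punctuation is hex-escaped (so '-' shows up as \x2d and '~' as \x7e). Below is a rough, simplified sketch of that mapping, added for illustration only; `escapeUnitPath` is a hypothetical helper, not systemd code, and the exact set of characters systemd leaves unescaped may differ slightly.

```go
package main

import (
	"fmt"
	"strings"
)

// escapeUnitPath approximates (simplified, for illustration) how systemd derives
// a unit name from a path: strip the leading slash, turn the remaining '/' into
// '-', keep ASCII alphanumerics plus '_' and '.', and hex-escape everything else
// as \xNN.
func escapeUnitPath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	path := "/var/lib/kubelet/pods/a61db773-a925-451f-85e2-1020f71a4c7e/volumes/kubernetes.io~projected/kube-api-access-z45zm"
	fmt.Println(escapeUnitPath(path) + ".mount")
}
```

Run against the projected-token volume path, this reproduces the var-lib-kubelet-pods-…-kube\x2dapi\x2daccess\x2dz45zm.mount unit name that the log reports as deactivated.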
Dec 16 12:08:16.701201 systemd[1]: var-lib-kubelet-pods-a61db773\x2da925\x2d451f\x2d85e2\x2d1020f71a4c7e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:08:16.702709 kubelet[2713]: E1216 12:08:16.702680 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:16.708975 systemd[1]: Removed slice kubepods-besteffort-poda61db773_a925_451f_85e2_1020f71a4c7e.slice - libcontainer container kubepods-besteffort-poda61db773_a925_451f_85e2_1020f71a4c7e.slice. Dec 16 12:08:16.719801 kubelet[2713]: I1216 12:08:16.719359 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k6klz" podStartSLOduration=1.896758438 podStartE2EDuration="11.719244756s" podCreationTimestamp="2025-12-16 12:08:05 +0000 UTC" firstStartedPulling="2025-12-16 12:08:06.147243698 +0000 UTC m=+22.763645109" lastFinishedPulling="2025-12-16 12:08:15.969730016 +0000 UTC m=+32.586131427" observedRunningTime="2025-12-16 12:08:16.718122695 +0000 UTC m=+33.334524146" watchObservedRunningTime="2025-12-16 12:08:16.719244756 +0000 UTC m=+33.335646167" Dec 16 12:08:16.782212 systemd[1]: Created slice kubepods-besteffort-podad89a778_3861_43c0_9fbf_a4ff7b7dedcf.slice - libcontainer container kubepods-besteffort-podad89a778_3861_43c0_9fbf_a4ff7b7dedcf.slice. Dec 16 12:08:16.793428 kubelet[2713]: I1216 12:08:16.793351 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad89a778-3861-43c0-9fbf-a4ff7b7dedcf-whisker-backend-key-pair\") pod \"whisker-6c68d78889-wl87r\" (UID: \"ad89a778-3861-43c0-9fbf-a4ff7b7dedcf\") " pod="calico-system/whisker-6c68d78889-wl87r" Dec 16 12:08:16.793546 kubelet[2713]: I1216 12:08:16.793494 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad89a778-3861-43c0-9fbf-a4ff7b7dedcf-whisker-ca-bundle\") pod \"whisker-6c68d78889-wl87r\" (UID: \"ad89a778-3861-43c0-9fbf-a4ff7b7dedcf\") " pod="calico-system/whisker-6c68d78889-wl87r" Dec 16 12:08:16.793546 kubelet[2713]: I1216 12:08:16.793527 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdk8\" (UniqueName: \"kubernetes.io/projected/ad89a778-3861-43c0-9fbf-a4ff7b7dedcf-kube-api-access-qwdk8\") pod \"whisker-6c68d78889-wl87r\" (UID: \"ad89a778-3861-43c0-9fbf-a4ff7b7dedcf\") " pod="calico-system/whisker-6c68d78889-wl87r" Dec 16 12:08:17.086849 containerd[1557]: time="2025-12-16T12:08:17.086787203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c68d78889-wl87r,Uid:ad89a778-3861-43c0-9fbf-a4ff7b7dedcf,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:17.272255 systemd-networkd[1471]: calif44a7b4edfa: Link UP Dec 16 12:08:17.272775 systemd-networkd[1471]: calif44a7b4edfa: Gained carrier Dec 16 12:08:17.285094 containerd[1557]: 2025-12-16 12:08:17.110 [INFO][3892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:08:17.285094 containerd[1557]: 2025-12-16 12:08:17.141 [INFO][3892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c68d78889--wl87r-eth0 whisker-6c68d78889- calico-system ad89a778-3861-43c0-9fbf-a4ff7b7dedcf 905 0 
2025-12-16 12:08:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c68d78889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c68d78889-wl87r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif44a7b4edfa [] [] }} ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-" Dec 16 12:08:17.285094 containerd[1557]: 2025-12-16 12:08:17.142 [INFO][3892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.285094 containerd[1557]: 2025-12-16 12:08:17.213 [INFO][3906] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" HandleID="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Workload="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.213 [INFO][3906] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" HandleID="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Workload="localhost-k8s-whisker--6c68d78889--wl87r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000522500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c68d78889-wl87r", "timestamp":"2025-12-16 12:08:17.213608092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.213 [INFO][3906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.213 [INFO][3906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.214 [INFO][3906] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.226 [INFO][3906] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" host="localhost" Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.238 [INFO][3906] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.243 [INFO][3906] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.248 [INFO][3906] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.251 [INFO][3906] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:17.285346 containerd[1557]: 2025-12-16 12:08:17.251 [INFO][3906] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" host="localhost" Dec 16 12:08:17.285542 containerd[1557]: 2025-12-16 12:08:17.252 [INFO][3906] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6 Dec 16 12:08:17.285542 containerd[1557]: 2025-12-16 12:08:17.257 [INFO][3906] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" host="localhost" Dec 16 12:08:17.285542 containerd[1557]: 2025-12-16 12:08:17.262 [INFO][3906] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" host="localhost" Dec 16 12:08:17.285542 containerd[1557]: 2025-12-16 12:08:17.262 [INFO][3906] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" host="localhost" Dec 16 12:08:17.285542 containerd[1557]: 2025-12-16 12:08:17.262 [INFO][3906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
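The IPAM trace above loads the node's affine block 192.168.88.128/26 and claims 192.168.88.129 from it. The snippet below is a small editorial sketch of that block arithmetic only, assuming (as the trace suggests) that the block's base address is skipped and candidates are walked in order; it does not reproduce Calico's actual allocation logic.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affine block from the IPAM trace above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Skip the block's base address and walk candidates in order; the first
	// candidate corresponds to the 192.168.88.129/26 claimed in the log.
	addr := block.Addr().Next()
	for i := 0; i < 3 && block.Contains(addr); i++ {
		fmt.Println(addr) // 192.168.88.129, 192.168.88.130, 192.168.88.131
		addr = addr.Next()
	}
}
```

The first candidate, 192.168.88.129, matches the address the plugin reports as successfully claimed and later assigned to the whisker pod's endpoint.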
Dec 16 12:08:17.285542 containerd[1557]: 2025-12-16 12:08:17.262 [INFO][3906] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" HandleID="k8s-pod-network.b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Workload="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.285655 containerd[1557]: 2025-12-16 12:08:17.265 [INFO][3892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c68d78889--wl87r-eth0", GenerateName:"whisker-6c68d78889-", Namespace:"calico-system", SelfLink:"", UID:"ad89a778-3861-43c0-9fbf-a4ff7b7dedcf", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c68d78889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c68d78889-wl87r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif44a7b4edfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:17.285655 containerd[1557]: 2025-12-16 12:08:17.265 [INFO][3892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.285717 containerd[1557]: 2025-12-16 12:08:17.265 [INFO][3892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif44a7b4edfa ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.285717 containerd[1557]: 2025-12-16 12:08:17.272 [INFO][3892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.285755 containerd[1557]: 2025-12-16 12:08:17.273 [INFO][3892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c68d78889--wl87r-eth0", GenerateName:"whisker-6c68d78889-", Namespace:"calico-system", SelfLink:"", UID:"ad89a778-3861-43c0-9fbf-a4ff7b7dedcf", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c68d78889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6", Pod:"whisker-6c68d78889-wl87r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif44a7b4edfa", MAC:"22:37:ba:de:66:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:17.285800 containerd[1557]: 2025-12-16 12:08:17.282 [INFO][3892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" Namespace="calico-system" Pod="whisker-6c68d78889-wl87r" WorkloadEndpoint="localhost-k8s-whisker--6c68d78889--wl87r-eth0" Dec 16 12:08:17.382049 containerd[1557]: time="2025-12-16T12:08:17.381909863Z" level=info msg="connecting to shim b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6" address="unix:///run/containerd/s/1c5921d18d8818a971b390455445295dee32bce40083af409d44302d78a6f37a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:17.421276 systemd[1]: Started cri-containerd-b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6.scope - libcontainer container b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6. 
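The containerd entries above mix logfmt-style records (time=…, level=…, msg=…) with pass-through Calico CNI output. When sifting a dump like this one, a throwaway extractor for the logfmt-style records can help; the sketch below is illustrative only, and the sample line is abridged from the shim-connection entry above.

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches level=<word> and a double-quoted msg, allowing backslash-escaped
// quotes inside the message body.
var entry = regexp.MustCompile(`level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func main() {
	// Abridged from the "connecting to shim" entry above.
	line := `time="2025-12-16T12:08:17.381909863Z" level=info msg="connecting to shim b310fdb6f5e1" protocol=ttrpc version=3`
	if m := entry.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s msg=%q\n", m[1], m[2])
	}
}
```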
Dec 16 12:08:17.431000 audit: BPF prog-id=173 op=LOAD Dec 16 12:08:17.431000 audit: BPF prog-id=174 op=LOAD Dec 16 12:08:17.431000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.431000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:08:17.431000 audit[3941]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.431000 audit: BPF prog-id=175 op=LOAD Dec 16 12:08:17.431000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.431000 audit: BPF prog-id=176 op=LOAD Dec 16 12:08:17.431000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.431000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:08:17.431000 audit[3941]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.431000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:08:17.431000 audit[3941]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.431000 audit: BPF prog-id=177 op=LOAD Dec 16 12:08:17.431000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3928 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313066646236663565313064616339623035323832393764616130 Dec 16 12:08:17.433549 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:17.465383 containerd[1557]: time="2025-12-16T12:08:17.465344892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c68d78889-wl87r,Uid:ad89a778-3861-43c0-9fbf-a4ff7b7dedcf,Namespace:calico-system,Attempt:0,} returns sandbox id \"b310fdb6f5e10dac9b0528297daa0831de264f5c49e5f95318a3a8ae045f41f6\"" Dec 16 12:08:17.466826 containerd[1557]: time="2025-12-16T12:08:17.466797366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:08:17.557959 kubelet[2713]: I1216 12:08:17.557921 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61db773-a925-451f-85e2-1020f71a4c7e" path="/var/lib/kubelet/pods/a61db773-a925-451f-85e2-1020f71a4c7e/volumes" Dec 16 12:08:17.690272 containerd[1557]: time="2025-12-16T12:08:17.690148835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:17.695291 containerd[1557]: time="2025-12-16T12:08:17.695228534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:08:17.695461 containerd[1557]: time="2025-12-16T12:08:17.695278337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:17.696098 kubelet[2713]: E1216 12:08:17.696058 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:08:17.697056 kubelet[2713]: E1216 12:08:17.696242 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:08:17.700877 
kubelet[2713]: E1216 12:08:17.700829 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:55f73bdb83b841a1b7d2492df2fc36c9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c68d78889-wl87r_calico-system(ad89a778-3861-43c0-9fbf-a4ff7b7dedcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:17.703895 containerd[1557]: time="2025-12-16T12:08:17.703866136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:08:17.707996 kubelet[2713]: E1216 12:08:17.707972 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:17.782000 audit: BPF prog-id=178 op=LOAD Dec 16 12:08:17.782000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf5e4db8 a2=98 a3=ffffcf5e4da8 items=0 ppid=3990 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:08:17.782000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:08:17.782000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcf5e4d88 a3=0 items=0 ppid=3990 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.782000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:08:17.782000 audit: BPF prog-id=179 op=LOAD Dec 16 12:08:17.782000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf5e4c68 a2=74 a3=95 items=0 ppid=3990 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:08:17.783000 audit: BPF prog-id=179 op=UNLOAD Dec 16 12:08:17.783000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3990 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.783000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:08:17.783000 audit: BPF prog-id=180 op=LOAD Dec 16 12:08:17.783000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf5e4c98 a2=40 a3=ffffcf5e4cc8 items=0 ppid=3990 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.783000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:08:17.783000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:08:17.783000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcf5e4cc8 items=0 ppid=3990 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.783000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:08:17.784000 audit: BPF prog-id=181 op=LOAD Dec 16 12:08:17.784000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebc12098 a2=98 a3=ffffebc12088 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.784000 audit: BPF 
prog-id=181 op=UNLOAD Dec 16 12:08:17.784000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffebc12068 a3=0 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.784000 audit: BPF prog-id=182 op=LOAD Dec 16 12:08:17.784000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffebc11d28 a2=74 a3=95 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.784000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:08:17.784000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.784000 audit: BPF prog-id=183 op=LOAD Dec 16 12:08:17.784000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffebc11d88 a2=94 a3=2 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.784000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:08:17.784000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.897000 audit: BPF prog-id=184 op=LOAD Dec 16 12:08:17.897000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffebc11d48 a2=40 a3=ffffebc11d78 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.897000 audit: BPF prog-id=184 op=UNLOAD Dec 16 12:08:17.897000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffebc11d78 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.907000 audit: BPF prog-id=185 op=LOAD Dec 16 12:08:17.907000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffebc11d58 a2=94 a3=4 items=0 ppid=3990 pid=4115 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.907000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:08:17.907000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.907000 audit: BPF prog-id=186 op=LOAD Dec 16 12:08:17.907000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffebc11b98 a2=94 a3=5 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.907000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:08:17.907000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.907000 audit: BPF prog-id=187 op=LOAD Dec 16 12:08:17.907000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffebc11dc8 a2=94 a3=6 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.907000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:08:17.907000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.908000 audit: BPF prog-id=188 op=LOAD Dec 16 12:08:17.908000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffebc11598 a2=94 a3=83 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.908000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.908000 audit: BPF prog-id=189 op=LOAD Dec 16 12:08:17.908000 audit[4115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffebc11358 a2=94 a3=2 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:08:17.908000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.908000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:08:17.908000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.908000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.908000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:08:17.908000 audit[4115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3817620 a3=380ab00 items=0 ppid=3990 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.908000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:08:17.917382 containerd[1557]: time="2025-12-16T12:08:17.917308137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:17.921386 containerd[1557]: time="2025-12-16T12:08:17.921346864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:08:17.921456 containerd[1557]: time="2025-12-16T12:08:17.921425868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:17.922095 kubelet[2713]: E1216 12:08:17.921562 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:08:17.922095 kubelet[2713]: E1216 12:08:17.921606 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:08:17.920000 audit: BPF prog-id=190 op=LOAD Dec 16 12:08:17.920000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5d90528 a2=98 a3=fffff5d90518 items=0 ppid=3990 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.920000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:08:17.920000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:08:17.920000 audit[4124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff5d904f8 a3=0 items=0 ppid=3990 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.920000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:08:17.920000 audit: BPF prog-id=191 op=LOAD Dec 16 12:08:17.920000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5d903d8 a2=74 a3=95 items=0 ppid=3990 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.920000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:08:17.921000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:08:17.921000 audit[4124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3990 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:08:17.921000 audit: BPF prog-id=192 op=LOAD Dec 16 12:08:17.921000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5d90408 a2=40 a3=fffff5d90438 items=0 ppid=3990 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:08:17.921000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:08:17.921000 audit[4124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff5d90438 items=0 ppid=3990 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:08:17.922667 kubelet[2713]: E1216 12:08:17.921726 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c68d78889-wl87r_calico-system(ad89a778-3861-43c0-9fbf-a4ff7b7dedcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:17.923120 kubelet[2713]: E1216 12:08:17.922983 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c68d78889-wl87r" podUID="ad89a778-3861-43c0-9fbf-a4ff7b7dedcf" Dec 16 12:08:17.980801 systemd-networkd[1471]: vxlan.calico: Link UP Dec 16 12:08:17.980807 systemd-networkd[1471]: vxlan.calico: Gained carrier Dec 16 12:08:17.996000 audit: BPF prog-id=193 op=LOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb0066e8 a2=98 a3=fffffb0066d8 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffb0066b8 a3=0 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=194 op=LOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb0063c8 a2=74 a3=95 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=195 op=LOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb006428 a2=94 a3=2 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=196 op=LOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=6 a0=5 a1=fffffb0062a8 a2=40 a3=fffffb0062d8 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffffb0062d8 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=197 op=LOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb0063f8 a2=94 a3=b7 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.996000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:08:17.996000 audit[4152]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.997000 audit: BPF prog-id=198 op=LOAD Dec 16 12:08:17.997000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb005aa8 a2=94 a3=2 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.997000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:08:17.997000 audit[4152]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.997000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:17.997000 audit: BPF prog-id=199 op=LOAD Dec 16 12:08:17.997000 audit[4152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb005c38 a2=94 a3=30 items=0 ppid=3990 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:17.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:08:18.001000 audit: BPF prog-id=200 op=LOAD Dec 16 12:08:18.001000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdd97cca8 a2=98 a3=ffffdd97cc98 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.001000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:08:18.001000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdd97cc78 a3=0 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.001000 audit: BPF prog-id=201 op=LOAD Dec 16 12:08:18.001000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd97c938 a2=74 a3=95 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.001000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:08:18.001000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.001000 audit: BPF prog-id=202 op=LOAD Dec 16 12:08:18.001000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd97c998 a2=94 a3=2 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.001000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:08:18.001000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.097000 audit: BPF prog-id=203 op=LOAD Dec 16 12:08:18.097000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd97c958 a2=40 a3=ffffdd97c988 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.097000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:08:18.097000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdd97c988 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=204 op=LOAD Dec 16 12:08:18.107000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd97c968 a2=94 a3=4 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:08:18.107000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=205 op=LOAD Dec 16 12:08:18.107000 audit[4154]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdd97c7a8 a2=94 a3=5 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:08:18.107000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=206 op=LOAD Dec 16 12:08:18.107000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd97c9d8 a2=94 a3=6 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:08:18.107000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.107000 audit: BPF prog-id=207 op=LOAD Dec 16 12:08:18.107000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd97c1a8 a2=94 a3=83 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.107000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.108000 audit: BPF prog-id=208 op=LOAD Dec 16 12:08:18.108000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdd97bf68 a2=94 a3=2 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.108000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.108000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:08:18.108000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.108000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:08:18.108000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=31717620 a3=3170ab00 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:08:18.118000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:08:18.118000 audit[3990]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000d66780 a2=0 a3=0 items=0 ppid=3971 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.118000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:08:18.159000 audit[4181]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4181 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:18.159000 audit[4181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffa03e040 a2=0 a3=ffffb635ffa8 items=0 ppid=3990 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.159000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:18.160000 audit[4182]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:18.160000 audit[4182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffffc70c720 a2=0 a3=ffffbefadfa8 items=0 ppid=3990 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.160000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:18.165000 audit[4180]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
12:08:18.165000 audit[4180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc76dc4d0 a2=0 a3=ffffbcd9efa8 items=0 ppid=3990 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.165000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:18.168000 audit[4184]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:18.168000 audit[4184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe1399030 a2=0 a3=ffffbeff2fa8 items=0 ppid=3990 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.168000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:18.709942 kubelet[2713]: E1216 12:08:18.709852 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:18.712071 kubelet[2713]: E1216 12:08:18.711890 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c68d78889-wl87r" podUID="ad89a778-3861-43c0-9fbf-a4ff7b7dedcf" Dec 16 12:08:18.732000 audit[4206]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:18.732000 audit[4206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffb5d2ee0 a2=0 a3=1 items=0 ppid=2826 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:18.738000 audit[4206]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:18.738000 audit[4206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffb5d2ee0 a2=0 a3=1 items=0 ppid=2826 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:18.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:19.187190 systemd-networkd[1471]: calif44a7b4edfa: Gained IPv6LL Dec 16 12:08:19.955201 systemd-networkd[1471]: vxlan.calico: Gained IPv6LL Dec 16 12:08:21.714538 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:52406.service - OpenSSH per-connection server daemon (10.0.0.1:52406). Dec 16 12:08:21.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.13:22-10.0.0.1:52406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:21.715720 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 16 12:08:21.715775 kernel: audit: type=1130 audit(1765886901.713:643): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.13:22-10.0.0.1:52406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:21.789000 audit[4229]: USER_ACCT pid=4229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.791098 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 52406 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:21.794035 kernel: audit: type=1101 audit(1765886901.789:644): pid=4229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.793000 audit[4229]: CRED_ACQ pid=4229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.795070 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:21.799727 kernel: audit: type=1103 audit(1765886901.793:645): pid=4229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.799797 kernel: audit: type=1006 audit(1765886901.793:646): pid=4229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 12:08:21.799814 kernel: audit: type=1300 audit(1765886901.793:646): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe6b9750 a2=3 a3=0 items=0 ppid=1 pid=4229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:21.793000 audit[4229]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe6b9750 a2=3 a3=0 items=0 ppid=1 pid=4229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:21.793000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:21.803624 systemd-logind[1531]: New session 9 of user core. Dec 16 12:08:21.804755 kernel: audit: type=1327 audit(1765886901.793:646): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:21.813276 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:08:21.815000 audit[4229]: USER_START pid=4229 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.817000 audit[4233]: CRED_ACQ pid=4233 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.823846 kernel: audit: type=1105 audit(1765886901.815:647): pid=4229 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.823898 kernel: audit: type=1103 audit(1765886901.817:648): pid=4233 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.947428 sshd[4233]: Connection closed by 10.0.0.1 port 52406 Dec 16 12:08:21.948057 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:21.948000 audit[4229]: USER_END pid=4229 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.952313 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:52406.service: Deactivated successfully. Dec 16 12:08:21.948000 audit[4229]: CRED_DISP pid=4229 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.954554 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:08:21.955383 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:08:21.956483 systemd-logind[1531]: Removed session 9. 
Dec 16 12:08:21.956804 kernel: audit: type=1106 audit(1765886901.948:649): pid=4229 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.956903 kernel: audit: type=1104 audit(1765886901.948:650): pid=4229 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:21.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.13:22-10.0.0.1:52406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:23.553047 kubelet[2713]: E1216 12:08:23.552981 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:23.553757 containerd[1557]: time="2025-12-16T12:08:23.553715967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2spcj,Uid:98d1e4ea-915a-4b83-837d-93550126bdaf,Namespace:kube-system,Attempt:0,}" Dec 16 12:08:23.682648 systemd-networkd[1471]: cali7d39040ac34: Link UP Dec 16 12:08:23.683299 systemd-networkd[1471]: cali7d39040ac34: Gained carrier Dec 16 12:08:23.701849 containerd[1557]: 2025-12-16 12:08:23.598 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2spcj-eth0 coredns-674b8bbfcf- kube-system 98d1e4ea-915a-4b83-837d-93550126bdaf 825 0 2025-12-16 12:07:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2spcj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d39040ac34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-" Dec 16 12:08:23.701849 containerd[1557]: 2025-12-16 12:08:23.598 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.701849 containerd[1557]: 2025-12-16 12:08:23.637 [INFO][4269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" HandleID="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Workload="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.637 [INFO][4269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" HandleID="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Workload="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ddda0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2spcj", "timestamp":"2025-12-16 12:08:23.637731523 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.637 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.638 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.638 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.648 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" host="localhost" Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.655 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.660 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.663 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.665 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:23.702295 containerd[1557]: 2025-12-16 12:08:23.665 [INFO][4269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" host="localhost" Dec 16 12:08:23.702876 containerd[1557]: 2025-12-16 12:08:23.666 [INFO][4269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4 Dec 16 12:08:23.702876 containerd[1557]: 2025-12-16 12:08:23.670 [INFO][4269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" host="localhost" Dec 16 12:08:23.702876 containerd[1557]: 2025-12-16 12:08:23.676 [INFO][4269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" host="localhost" Dec 16 12:08:23.702876 containerd[1557]: 2025-12-16 12:08:23.676 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" host="localhost" Dec 16 12:08:23.702876 containerd[1557]: 2025-12-16 12:08:23.676 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:08:23.702876 containerd[1557]: 2025-12-16 12:08:23.676 [INFO][4269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" HandleID="k8s-pod-network.d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Workload="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.703182 containerd[1557]: 2025-12-16 12:08:23.679 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2spcj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98d1e4ea-915a-4b83-837d-93550126bdaf", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2spcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d39040ac34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:23.703269 containerd[1557]: 2025-12-16 12:08:23.679 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.703269 containerd[1557]: 2025-12-16 12:08:23.679 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d39040ac34 ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.703269 containerd[1557]: 2025-12-16 12:08:23.683 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.703369 
containerd[1557]: 2025-12-16 12:08:23.684 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2spcj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98d1e4ea-915a-4b83-837d-93550126bdaf", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4", Pod:"coredns-674b8bbfcf-2spcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d39040ac34", MAC:"1e:50:cf:4b:17:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:23.703369 containerd[1557]: 2025-12-16 12:08:23.698 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-2spcj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2spcj-eth0" Dec 16 12:08:23.716000 audit[4288]: NETFILTER_CFG table=filter:129 family=2 entries=42 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:23.716000 audit[4288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc23c06b0 a2=0 a3=ffff9f1e7fa8 items=0 ppid=3990 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.716000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:23.722906 containerd[1557]: time="2025-12-16T12:08:23.722866915Z" level=info msg="connecting to shim d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4" address="unix:///run/containerd/s/c5d9dee590a6fe3a313cfd27be6f7b19b62468b00efd0815b148b16308b37c70" namespace=k8s.io 
protocol=ttrpc version=3 Dec 16 12:08:23.754220 systemd[1]: Started cri-containerd-d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4.scope - libcontainer container d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4. Dec 16 12:08:23.765000 audit: BPF prog-id=209 op=LOAD Dec 16 12:08:23.765000 audit: BPF prog-id=210 op=LOAD Dec 16 12:08:23.765000 audit[4308]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.765000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:08:23.765000 audit[4308]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.765000 audit: BPF prog-id=211 op=LOAD Dec 16 12:08:23.765000 audit[4308]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.766000 audit: BPF prog-id=212 op=LOAD Dec 16 12:08:23.766000 audit[4308]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.766000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:08:23.766000 audit[4308]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.766000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.766000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:08:23.766000 audit[4308]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.766000 audit: BPF prog-id=213 op=LOAD Dec 16 12:08:23.766000 audit[4308]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4296 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663964346132333763396336643663393264393838393163643235 Dec 16 12:08:23.768172 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:23.792647 containerd[1557]: time="2025-12-16T12:08:23.792572601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2spcj,Uid:98d1e4ea-915a-4b83-837d-93550126bdaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4\"" Dec 16 12:08:23.793517 kubelet[2713]: E1216 12:08:23.793493 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:23.798066 containerd[1557]: time="2025-12-16T12:08:23.797969338Z" level=info msg="CreateContainer within sandbox \"d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:08:23.808307 containerd[1557]: time="2025-12-16T12:08:23.808260716Z" level=info msg="Container 4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:08:23.809545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373051417.mount: Deactivated successfully. 
Dec 16 12:08:23.814855 containerd[1557]: time="2025-12-16T12:08:23.814802210Z" level=info msg="CreateContainer within sandbox \"d5f9d4a237c9c6d6c92d98891cd25735bcdb57bc62c506688efcd13da99ce8d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9\"" Dec 16 12:08:23.815308 containerd[1557]: time="2025-12-16T12:08:23.815284266Z" level=info msg="StartContainer for \"4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9\"" Dec 16 12:08:23.816314 containerd[1557]: time="2025-12-16T12:08:23.816286459Z" level=info msg="connecting to shim 4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9" address="unix:///run/containerd/s/c5d9dee590a6fe3a313cfd27be6f7b19b62468b00efd0815b148b16308b37c70" protocol=ttrpc version=3 Dec 16 12:08:23.842257 systemd[1]: Started cri-containerd-4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9.scope - libcontainer container 4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9. Dec 16 12:08:23.852000 audit: BPF prog-id=214 op=LOAD Dec 16 12:08:23.852000 audit: BPF prog-id=215 op=LOAD Dec 16 12:08:23.852000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.852000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:08:23.852000 audit[4334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.852000 audit: BPF prog-id=216 op=LOAD Dec 16 12:08:23.852000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.853000 audit: BPF prog-id=217 op=LOAD Dec 16 12:08:23.853000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.853000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.853000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:08:23.853000 audit[4334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.853000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:08:23.853000 audit[4334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.853000 audit: BPF prog-id=218 op=LOAD Dec 16 12:08:23.853000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4296 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:23.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313864663639383532333837316436343162343663376137363238 Dec 16 12:08:23.877634 containerd[1557]: time="2025-12-16T12:08:23.877595950Z" level=info msg="StartContainer for \"4318df698523871d641b46c7a762839b344017f397651096223391b332b8d4f9\" returns successfully" Dec 16 12:08:24.728707 kubelet[2713]: E1216 12:08:24.728513 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:24.747927 kubelet[2713]: I1216 12:08:24.747273 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2spcj" podStartSLOduration=35.747256641999996 podStartE2EDuration="35.747256642s" podCreationTimestamp="2025-12-16 12:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:08:24.746515699 +0000 UTC m=+41.362917110" watchObservedRunningTime="2025-12-16 12:08:24.747256642 +0000 UTC m=+41.363658053" Dec 16 12:08:24.757000 audit[4370]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:24.757000 audit[4370]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=7480 a0=3 a1=ffffda7d0890 a2=0 a3=1 items=0 ppid=2826 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:24.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:24.762000 audit[4370]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:24.762000 audit[4370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffda7d0890 a2=0 a3=1 items=0 ppid=2826 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:24.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:24.784000 audit[4372]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:24.784000 audit[4372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd07e4b30 a2=0 a3=1 items=0 ppid=2826 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:24.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:24.796000 audit[4372]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:24.796000 audit[4372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd07e4b30 a2=0 a3=1 items=0 ppid=2826 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:24.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:25.523147 systemd-networkd[1471]: cali7d39040ac34: Gained IPv6LL Dec 16 12:08:25.553557 containerd[1557]: time="2025-12-16T12:08:25.553502563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fx4jj,Uid:8b37b908-ae1e-416d-92b4-0b4c5064435f,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:25.651081 systemd-networkd[1471]: cali93eb3cbc3a6: Link UP Dec 16 12:08:25.651585 systemd-networkd[1471]: cali93eb3cbc3a6: Gained carrier Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.590 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fx4jj-eth0 csi-node-driver- calico-system 8b37b908-ae1e-416d-92b4-0b4c5064435f 737 0 2025-12-16 12:08:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fx4jj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali93eb3cbc3a6 [] [] }} ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.590 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.613 [INFO][4388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" HandleID="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Workload="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.614 [INFO][4388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" HandleID="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Workload="localhost-k8s-csi--node--driver--fx4jj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fx4jj", "timestamp":"2025-12-16 12:08:25.613873116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.614 [INFO][4388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.614 [INFO][4388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.614 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.624 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.628 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.632 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.634 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.636 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.636 [INFO][4388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.637 [INFO][4388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853 Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.640 [INFO][4388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.646 [INFO][4388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.646 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" host="localhost" Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.646 [INFO][4388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:08:25.665306 containerd[1557]: 2025-12-16 12:08:25.646 [INFO][4388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" HandleID="k8s-pod-network.4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Workload="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.665887 containerd[1557]: 2025-12-16 12:08:25.649 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fx4jj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b37b908-ae1e-416d-92b4-0b4c5064435f", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fx4jj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93eb3cbc3a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:25.665887 containerd[1557]: 2025-12-16 12:08:25.649 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.665887 containerd[1557]: 2025-12-16 12:08:25.649 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93eb3cbc3a6 ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.665887 containerd[1557]: 2025-12-16 12:08:25.652 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.665887 containerd[1557]: 2025-12-16 12:08:25.652 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fx4jj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b37b908-ae1e-416d-92b4-0b4c5064435f", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853", Pod:"csi-node-driver-fx4jj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93eb3cbc3a6", MAC:"76:8b:d9:79:ae:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:25.665887 containerd[1557]: 2025-12-16 12:08:25.662 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" Namespace="calico-system" Pod="csi-node-driver-fx4jj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fx4jj-eth0" Dec 16 12:08:25.672000 audit[4404]: NETFILTER_CFG table=filter:134 family=2 entries=40 op=nft_register_chain pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:25.672000 audit[4404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffc7390fe0 a2=0 a3=ffffabdc2fa8 items=0 ppid=3990 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:25.690566 containerd[1557]: time="2025-12-16T12:08:25.690524695Z" level=info msg="connecting to shim 4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853" address="unix:///run/containerd/s/13f6b0b0764e9a9bd79873f6e6a0dded6c0984e4e9ff46a85135ce5b6c0564b4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:25.715199 systemd[1]: Started cri-containerd-4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853.scope - libcontainer container 4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853. 
Dec 16 12:08:25.725000 audit: BPF prog-id=219 op=LOAD Dec 16 12:08:25.726000 audit: BPF prog-id=220 op=LOAD Dec 16 12:08:25.726000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.726000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:08:25.726000 audit[4425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.726000 audit: BPF prog-id=221 op=LOAD Dec 16 12:08:25.726000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.726000 audit: BPF prog-id=222 op=LOAD Dec 16 12:08:25.726000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.726000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:08:25.726000 audit[4425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.726000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:08:25.726000 audit[4425]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.726000 audit: BPF prog-id=223 op=LOAD Dec 16 12:08:25.726000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4414 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:25.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466366465623761353232346231366139366261393266303465653066 Dec 16 12:08:25.728081 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:25.730614 kubelet[2713]: E1216 12:08:25.730591 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:25.743797 containerd[1557]: time="2025-12-16T12:08:25.743758867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fx4jj,Uid:8b37b908-ae1e-416d-92b4-0b4c5064435f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f6deb7a5224b16a96ba92f04ee0ff504373112388c9bcd61725be9e99add853\"" Dec 16 12:08:25.745324 containerd[1557]: time="2025-12-16T12:08:25.745245713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:08:25.965903 containerd[1557]: time="2025-12-16T12:08:25.965842678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:25.966888 containerd[1557]: time="2025-12-16T12:08:25.966804108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:08:25.966945 containerd[1557]: time="2025-12-16T12:08:25.966885790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:25.967339 kubelet[2713]: E1216 12:08:25.967083 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:08:25.967339 kubelet[2713]: E1216 12:08:25.967137 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:08:25.967339 kubelet[2713]: E1216 12:08:25.967281 2713 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26t9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:25.968993 containerd[1557]: time="2025-12-16T12:08:25.968968415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:08:26.185635 containerd[1557]: time="2025-12-16T12:08:26.185444617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:26.186409 containerd[1557]: time="2025-12-16T12:08:26.186317603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:08:26.186850 containerd[1557]: time="2025-12-16T12:08:26.186529290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:26.186902 kubelet[2713]: E1216 12:08:26.186603 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:08:26.186902 kubelet[2713]: E1216 12:08:26.186645 2713 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:08:26.186902 kubelet[2713]: E1216 12:08:26.186773 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26t9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:26.187982 kubelet[2713]: E1216 12:08:26.187945 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:26.552825 containerd[1557]: time="2025-12-16T12:08:26.552779506Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-2rr6k,Uid:cce57e8e-9a64-42b6-8b26-7e3f06d75548,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:26.553115 containerd[1557]: time="2025-12-16T12:08:26.552787306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6dbc8bbd-fps4t,Uid:bc6dd9ec-9c70-48ca-8951-46abf4919f50,Namespace:calico-system,Attempt:0,}" Dec 16 12:08:26.670033 systemd-networkd[1471]: calibd8d706f23c: Link UP Dec 16 12:08:26.670560 systemd-networkd[1471]: calibd8d706f23c: Gained carrier Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.599 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0 calico-kube-controllers-5d6dbc8bbd- calico-system bc6dd9ec-9c70-48ca-8951-46abf4919f50 829 0 2025-12-16 12:08:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d6dbc8bbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d6dbc8bbd-fps4t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibd8d706f23c [] [] }} ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.599 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.626 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" HandleID="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Workload="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.626 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" HandleID="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Workload="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d6dbc8bbd-fps4t", "timestamp":"2025-12-16 12:08:26.626325046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.626 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.626 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.626 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.635 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.641 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.645 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.647 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.649 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.649 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.651 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.654 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.659 [INFO][4481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.659 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" host="localhost" Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.659 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:08:26.685610 containerd[1557]: 2025-12-16 12:08:26.659 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" HandleID="k8s-pod-network.8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Workload="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.686765 containerd[1557]: 2025-12-16 12:08:26.666 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0", GenerateName:"calico-kube-controllers-5d6dbc8bbd-", Namespace:"calico-system", SelfLink:"", UID:"bc6dd9ec-9c70-48ca-8951-46abf4919f50", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6dbc8bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d6dbc8bbd-fps4t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd8d706f23c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:26.686765 containerd[1557]: 2025-12-16 12:08:26.666 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.686765 containerd[1557]: 2025-12-16 12:08:26.666 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd8d706f23c ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.686765 containerd[1557]: 2025-12-16 12:08:26.670 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.686765 containerd[1557]: 2025-12-16 12:08:26.671 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0", GenerateName:"calico-kube-controllers-5d6dbc8bbd-", Namespace:"calico-system", SelfLink:"", UID:"bc6dd9ec-9c70-48ca-8951-46abf4919f50", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6dbc8bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe", Pod:"calico-kube-controllers-5d6dbc8bbd-fps4t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd8d706f23c", MAC:"d6:cb:8a:27:72:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:26.686765 containerd[1557]: 2025-12-16 12:08:26.683 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" Namespace="calico-system" Pod="calico-kube-controllers-5d6dbc8bbd-fps4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d6dbc8bbd--fps4t-eth0" Dec 16 12:08:26.694000 audit[4506]: NETFILTER_CFG table=filter:135 family=2 entries=44 op=nft_register_chain pid=4506 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:26.694000 audit[4506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21952 a0=3 a1=fffffd7cfee0 a2=0 a3=ffff9151efa8 items=0 ppid=3990 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.694000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:26.710174 containerd[1557]: time="2025-12-16T12:08:26.710138737Z" level=info msg="connecting to shim 8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe" address="unix:///run/containerd/s/7eda995022ee595c812e8ba128054b1483f79b7d07b749f563c74e77e6c4499c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:26.734126 kubelet[2713]: E1216 12:08:26.734070 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 
16 12:08:26.738167 kubelet[2713]: E1216 12:08:26.738116 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:26.752169 systemd[1]: Started cri-containerd-8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe.scope - libcontainer container 8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe. Dec 16 12:08:26.777000 audit: BPF prog-id=224 op=LOAD Dec 16 12:08:26.779316 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 16 12:08:26.779381 kernel: audit: type=1334 audit(1765886906.777:683): prog-id=224 op=LOAD Dec 16 12:08:26.779000 audit: BPF prog-id=225 op=LOAD Dec 16 12:08:26.781950 kernel: audit: type=1334 audit(1765886906.779:684): prog-id=225 op=LOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.786871 kernel: audit: type=1300 audit(1765886906.779:684): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.788027 systemd-networkd[1471]: cali444223cc15a: Link UP Dec 16 12:08:26.788226 systemd-networkd[1471]: cali444223cc15a: Gained carrier Dec 16 12:08:26.792823 kernel: audit: type=1327 audit(1765886906.779:684): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.789785 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:26.779000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:08:26.795087 kernel: audit: type=1334 audit(1765886906.779:685): prog-id=225 op=UNLOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.800032 kernel: audit: type=1300 audit(1765886906.779:685): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.811383 kernel: audit: type=1327 audit(1765886906.779:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.811470 kernel: audit: type=1334 audit(1765886906.779:686): prog-id=226 op=LOAD Dec 16 12:08:26.811489 kernel: audit: type=1300 audit(1765886906.779:686): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.779000 audit: BPF prog-id=226 op=LOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.596 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--2rr6k-eth0 goldmane-666569f655- calico-system cce57e8e-9a64-42b6-8b26-7e3f06d75548 836 0 2025-12-16 12:08:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-2rr6k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali444223cc15a [] [] }} ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.596 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.627 [INFO][4479] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" HandleID="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Workload="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 
12:08:26.627 [INFO][4479] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" HandleID="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Workload="localhost-k8s-goldmane--666569f655--2rr6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b12f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-2rr6k", "timestamp":"2025-12-16 12:08:26.627197593 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.627 [INFO][4479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.659 [INFO][4479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.659 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.738 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.747 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.756 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.759 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.763 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.763 [INFO][4479] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.765 [INFO][4479] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7 Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.770 [INFO][4479] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.775 [INFO][4479] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.775 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" host="localhost" Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.775 [INFO][4479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:08:26.811904 containerd[1557]: 2025-12-16 12:08:26.775 [INFO][4479] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" HandleID="k8s-pod-network.383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Workload="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.812381 containerd[1557]: 2025-12-16 12:08:26.780 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2rr6k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cce57e8e-9a64-42b6-8b26-7e3f06d75548", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-2rr6k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali444223cc15a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:26.812381 containerd[1557]: 2025-12-16 12:08:26.780 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.812381 containerd[1557]: 2025-12-16 12:08:26.780 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali444223cc15a ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.812381 containerd[1557]: 2025-12-16 12:08:26.785 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.812381 containerd[1557]: 2025-12-16 12:08:26.788 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2rr6k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cce57e8e-9a64-42b6-8b26-7e3f06d75548", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7", Pod:"goldmane-666569f655-2rr6k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali444223cc15a", MAC:"d6:6c:e9:e7:18:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:26.812381 containerd[1557]: 2025-12-16 12:08:26.800 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" Namespace="calico-system" Pod="goldmane-666569f655-2rr6k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2rr6k-eth0" Dec 16 12:08:26.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.817528 kernel: audit: type=1327 audit(1765886906.779:686): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.779000 audit: BPF prog-id=227 op=LOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.779000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.779000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.779000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.779000 audit: BPF prog-id=228 op=LOAD Dec 16 12:08:26.779000 audit[4526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4515 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616665353366383761363333383238623265383638303631356236 Dec 16 12:08:26.816000 audit[4554]: NETFILTER_CFG table=filter:136 family=2 entries=56 op=nft_register_chain pid=4554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:26.816000 audit[4554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28744 a0=3 a1=ffffe34a00d0 a2=0 a3=ffff9c6f9fa8 items=0 ppid=3990 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.816000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:26.830557 containerd[1557]: time="2025-12-16T12:08:26.830500210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6dbc8bbd-fps4t,Uid:bc6dd9ec-9c70-48ca-8951-46abf4919f50,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bafe53f87a633828b2e8680615b6de749bb254d3f320fa015abedb4db557ebe\"" Dec 16 12:08:26.835330 containerd[1557]: time="2025-12-16T12:08:26.835295515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:08:26.840207 containerd[1557]: time="2025-12-16T12:08:26.840172102Z" level=info msg="connecting to shim 383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7" address="unix:///run/containerd/s/685c49620e53d8b4925ef772f525b20ccaa785be4bdfdd12ce3178f6e8392c5e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:26.868250 systemd[1]: Started cri-containerd-383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7.scope - libcontainer container 383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7. 
Dec 16 12:08:26.877000 audit: BPF prog-id=229 op=LOAD Dec 16 12:08:26.878000 audit: BPF prog-id=230 op=LOAD Dec 16 12:08:26.878000 audit[4579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.878000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:08:26.878000 audit[4579]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.878000 audit: BPF prog-id=231 op=LOAD Dec 16 12:08:26.878000 audit[4579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.878000 audit: BPF prog-id=232 op=LOAD Dec 16 12:08:26.878000 audit[4579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.878000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:08:26.878000 audit[4579]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.878000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:08:26.878000 audit[4579]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.878000 audit: BPF prog-id=233 op=LOAD Dec 16 12:08:26.878000 audit[4579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4569 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:26.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336134616461363232613762333735386566356163393536383930 Dec 16 12:08:26.880279 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:26.906790 containerd[1557]: time="2025-12-16T12:08:26.906739991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2rr6k,Uid:cce57e8e-9a64-42b6-8b26-7e3f06d75548,Namespace:calico-system,Attempt:0,} returns sandbox id \"383a4ada622a7b3758ef5ac95689093ce2d316bd7f3bb68b0a4d2b3c961290d7\"" Dec 16 12:08:26.960314 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:52410.service - OpenSSH per-connection server daemon (10.0.0.1:52410). Dec 16 12:08:26.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.13:22-10.0.0.1:52410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:27.052000 audit[4606]: USER_ACCT pid=4606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:27.053337 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 52410 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:27.053000 audit[4606]: CRED_ACQ pid=4606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:27.053000 audit[4606]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcea618d0 a2=3 a3=0 items=0 ppid=1 pid=4606 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:27.055353 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:27.060091 systemd-logind[1531]: New session 10 of user core. 
Dec 16 12:08:27.064624 containerd[1557]: time="2025-12-16T12:08:27.064537223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:27.065767 containerd[1557]: time="2025-12-16T12:08:27.065731378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:08:27.065855 containerd[1557]: time="2025-12-16T12:08:27.065819381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:27.066054 kubelet[2713]: E1216 12:08:27.065993 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:08:27.066104 kubelet[2713]: E1216 12:08:27.066068 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:08:27.066333 kubelet[2713]: E1216 12:08:27.066285 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnh8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6dbc8bbd-fps4t_calico-system(bc6dd9ec-9c70-48ca-8951-46abf4919f50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:27.067176 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:08:27.067955 kubelet[2713]: E1216 12:08:27.067919 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50" Dec 16 12:08:27.069000 audit[4606]: USER_START pid=4606 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:27.071908 containerd[1557]: time="2025-12-16T12:08:27.071854198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:08:27.071000 audit[4610]: CRED_ACQ pid=4610 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:27.211461 sshd[4610]: Connection closed by 10.0.0.1 port 52410 Dec 16 12:08:27.211752 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:27.211000 audit[4606]: USER_END pid=4606 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:27.212000 audit[4606]: CRED_DISP pid=4606 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:27.215907 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:52410.service: Deactivated successfully. 
Dec 16 12:08:27.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.13:22-10.0.0.1:52410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:27.217707 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:08:27.218429 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:08:27.219228 systemd-logind[1531]: Removed session 10. Dec 16 12:08:27.251262 systemd-networkd[1471]: cali93eb3cbc3a6: Gained IPv6LL Dec 16 12:08:27.288457 containerd[1557]: time="2025-12-16T12:08:27.288400399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:27.289535 containerd[1557]: time="2025-12-16T12:08:27.289493911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:08:27.289620 containerd[1557]: time="2025-12-16T12:08:27.289576033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:27.289762 kubelet[2713]: E1216 12:08:27.289724 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:08:27.289807 kubelet[2713]: E1216 12:08:27.289774 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:08:27.289985 kubelet[2713]: E1216 12:08:27.289928 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crn5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2rr6k_calico-system(cce57e8e-9a64-42b6-8b26-7e3f06d75548): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:27.291098 kubelet[2713]: E1216 12:08:27.291065 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:08:27.553509 containerd[1557]: time="2025-12-16T12:08:27.553305419Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-g5j9p,Uid:35f02a41-1704-43ba-9b8e-82676bfbc14d,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:08:27.553615 kubelet[2713]: E1216 12:08:27.553325 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:27.553808 containerd[1557]: time="2025-12-16T12:08:27.553701591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-jgs6w,Uid:0ff6ef5a-c884-4b41-929a-344b8f5bf998,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:08:27.554618 containerd[1557]: time="2025-12-16T12:08:27.554569736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fp6fd,Uid:cc2d4454-3b99-4f0b-9a73-a20874110667,Namespace:kube-system,Attempt:0,}" Dec 16 12:08:27.675873 systemd-networkd[1471]: cali3bc43c69934: Link UP Dec 16 12:08:27.678382 systemd-networkd[1471]: cali3bc43c69934: Gained carrier Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.611 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0 calico-apiserver-7f5b89cfdd- calico-apiserver 35f02a41-1704-43ba-9b8e-82676bfbc14d 835 0 2025-12-16 12:07:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5b89cfdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5b89cfdd-g5j9p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3bc43c69934 [] [] }} ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.611 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.636 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" HandleID="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Workload="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.636 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" HandleID="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Workload="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5b89cfdd-g5j9p", "timestamp":"2025-12-16 12:08:27.636364859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.636 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.637 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.637 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.647 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.651 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.656 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.657 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.659 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.659 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.660 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395 Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.664 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.669 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.669 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" host="localhost" Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.669 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:08:27.686641 containerd[1557]: 2025-12-16 12:08:27.669 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" HandleID="k8s-pod-network.39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Workload="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.687518 containerd[1557]: 2025-12-16 12:08:27.671 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0", GenerateName:"calico-apiserver-7f5b89cfdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"35f02a41-1704-43ba-9b8e-82676bfbc14d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5b89cfdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5b89cfdd-g5j9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bc43c69934", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:27.687518 containerd[1557]: 2025-12-16 12:08:27.672 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.687518 containerd[1557]: 2025-12-16 12:08:27.673 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bc43c69934 ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.687518 containerd[1557]: 2025-12-16 12:08:27.676 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.687518 containerd[1557]: 2025-12-16 12:08:27.676 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0", GenerateName:"calico-apiserver-7f5b89cfdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"35f02a41-1704-43ba-9b8e-82676bfbc14d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5b89cfdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395", Pod:"calico-apiserver-7f5b89cfdd-g5j9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bc43c69934", MAC:"2e:dd:a1:cd:63:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:27.687518 containerd[1557]: 2025-12-16 12:08:27.684 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-g5j9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--g5j9p-eth0" Dec 16 12:08:27.697000 audit[4710]: NETFILTER_CFG table=filter:137 family=2 entries=66 op=nft_register_chain pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:27.697000 audit[4710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32960 a0=3 a1=ffffdcf18f20 a2=0 a3=ffff814dbfa8 items=0 ppid=3990 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.697000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:27.712187 containerd[1557]: time="2025-12-16T12:08:27.712149365Z" level=info msg="connecting to shim 39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395" address="unix:///run/containerd/s/9e6dee9bc7d2df5db12cd1dc72ccb04a106c817a6adb14f568f2bf2220979bbf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:27.736523 kubelet[2713]: E1216 12:08:27.736476 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50" Dec 16 12:08:27.740063 kubelet[2713]: E1216 12:08:27.740032 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:08:27.741415 kubelet[2713]: E1216 12:08:27.741382 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:27.745235 systemd[1]: Started cri-containerd-39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395.scope - libcontainer container 39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395. 
Dec 16 12:08:27.762000 audit: BPF prog-id=234 op=LOAD Dec 16 12:08:27.763000 audit: BPF prog-id=235 op=LOAD Dec 16 12:08:27.763000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.763000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:08:27.763000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.763000 audit: BPF prog-id=236 op=LOAD Dec 16 12:08:27.763000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.763000 audit: BPF prog-id=237 op=LOAD Dec 16 12:08:27.763000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.763000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:08:27.763000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.763000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:08:27.763000 audit[4731]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.764000 audit: BPF prog-id=238 op=LOAD Dec 16 12:08:27.764000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4718 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339653433366237333066316431343833393063613561353634376536 Dec 16 12:08:27.766142 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:27.796996 containerd[1557]: time="2025-12-16T12:08:27.796919935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-g5j9p,Uid:35f02a41-1704-43ba-9b8e-82676bfbc14d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"39e436b730f1d148390ca5a5647e66d6b3849a7e38fe9c35b6ee000159246395\"" Dec 16 12:08:27.799000 audit[4759]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:27.799000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd24ed5f0 a2=0 a3=1 items=0 ppid=2826 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:27.802113 containerd[1557]: time="2025-12-16T12:08:27.801191620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:08:27.809000 audit[4759]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:27.809000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd24ed5f0 a2=0 a3=1 items=0 ppid=2826 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:27.817122 systemd-networkd[1471]: cali7406992a0ee: Link UP Dec 16 12:08:27.818184 systemd-networkd[1471]: cali7406992a0ee: Gained carrier Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.604 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0 coredns-674b8bbfcf- kube-system cc2d4454-3b99-4f0b-9a73-a20874110667 833 0 2025-12-16 12:07:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fp6fd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7406992a0ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.604 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.636 [INFO][4677] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" HandleID="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Workload="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.637 [INFO][4677] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" HandleID="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Workload="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fp6fd", "timestamp":"2025-12-16 12:08:27.636364099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.637 [INFO][4677] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.670 [INFO][4677] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.670 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.751 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.764 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.787 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.791 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.793 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.793 [INFO][4677] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.795 [INFO][4677] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.799 [INFO][4677] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.808 [INFO][4677] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.809 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" host="localhost" Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.809 [INFO][4677] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:08:27.839801 containerd[1557]: 2025-12-16 12:08:27.809 [INFO][4677] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" HandleID="k8s-pod-network.ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Workload="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.840338 containerd[1557]: 2025-12-16 12:08:27.811 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc2d4454-3b99-4f0b-9a73-a20874110667", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fp6fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406992a0ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:27.840338 containerd[1557]: 2025-12-16 12:08:27.812 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.840338 containerd[1557]: 2025-12-16 12:08:27.812 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7406992a0ee ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.840338 containerd[1557]: 2025-12-16 12:08:27.818 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.840338 
containerd[1557]: 2025-12-16 12:08:27.820 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc2d4454-3b99-4f0b-9a73-a20874110667", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f", Pod:"coredns-674b8bbfcf-fp6fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406992a0ee", MAC:"06:1d:95:ba:99:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:27.840338 containerd[1557]: 2025-12-16 12:08:27.836 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fp6fd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fp6fd-eth0" Dec 16 12:08:27.852000 audit[4770]: NETFILTER_CFG table=filter:140 family=2 entries=58 op=nft_register_chain pid=4770 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:27.852000 audit[4770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26760 a0=3 a1=ffffcf1b8750 a2=0 a3=ffff9d24afa8 items=0 ppid=3990 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.852000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:27.859118 containerd[1557]: time="2025-12-16T12:08:27.859082921Z" level=info msg="connecting to shim ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f" address="unix:///run/containerd/s/ad42a5bf9478fdc12409dc611145e875b1c9d205fbc4a5b40532086829423fd8" namespace=k8s.io 
protocol=ttrpc version=3 Dec 16 12:08:27.890246 systemd[1]: Started cri-containerd-ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f.scope - libcontainer container ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f. Dec 16 12:08:27.905000 audit: BPF prog-id=239 op=LOAD Dec 16 12:08:27.906000 audit: BPF prog-id=240 op=LOAD Dec 16 12:08:27.906000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.907000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:08:27.907000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.907000 audit: BPF prog-id=241 op=LOAD Dec 16 12:08:27.907000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.907000 audit: BPF prog-id=242 op=LOAD Dec 16 12:08:27.907000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.907000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:08:27.907000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.907000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.907000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:08:27.907000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.907000 audit: BPF prog-id=243 op=LOAD Dec 16 12:08:27.907000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666396134663832353062383538643733373665393032646461623665 Dec 16 12:08:27.909788 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:27.909860 systemd-networkd[1471]: cali220adb729a6: Link UP Dec 16 12:08:27.910074 systemd-networkd[1471]: cali220adb729a6: Gained carrier Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.605 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0 calico-apiserver-7f5b89cfdd- calico-apiserver 0ff6ef5a-c884-4b41-929a-344b8f5bf998 831 0 2025-12-16 12:07:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5b89cfdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5b89cfdd-jgs6w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali220adb729a6 [] [] }} ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.605 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.644 [INFO][4676] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" 
HandleID="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Workload="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.644 [INFO][4676] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" HandleID="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Workload="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5b89cfdd-jgs6w", "timestamp":"2025-12-16 12:08:27.644677943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.644 [INFO][4676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.809 [INFO][4676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.809 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.849 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.867 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.883 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.884 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.887 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.887 [INFO][4676] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.888 [INFO][4676] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6 Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.893 [INFO][4676] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.900 [INFO][4676] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.902 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] 
handle="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" host="localhost" Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.902 [INFO][4676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:08:27.928858 containerd[1557]: 2025-12-16 12:08:27.902 [INFO][4676] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" HandleID="k8s-pod-network.15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Workload="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.929412 containerd[1557]: 2025-12-16 12:08:27.904 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0", GenerateName:"calico-apiserver-7f5b89cfdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ff6ef5a-c884-4b41-929a-344b8f5bf998", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5b89cfdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5b89cfdd-jgs6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali220adb729a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:27.929412 containerd[1557]: 2025-12-16 12:08:27.904 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.929412 containerd[1557]: 2025-12-16 12:08:27.904 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali220adb729a6 ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.929412 containerd[1557]: 2025-12-16 12:08:27.911 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.929412 containerd[1557]: 2025-12-16 12:08:27.913 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0", GenerateName:"calico-apiserver-7f5b89cfdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ff6ef5a-c884-4b41-929a-344b8f5bf998", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5b89cfdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6", Pod:"calico-apiserver-7f5b89cfdd-jgs6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali220adb729a6", MAC:"46:f7:af:18:78:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:08:27.929412 containerd[1557]: 2025-12-16 12:08:27.924 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" Namespace="calico-apiserver" Pod="calico-apiserver-7f5b89cfdd-jgs6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5b89cfdd--jgs6w-eth0" Dec 16 12:08:27.942665 containerd[1557]: time="2025-12-16T12:08:27.942615014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fp6fd,Uid:cc2d4454-3b99-4f0b-9a73-a20874110667,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f\"" Dec 16 12:08:27.943859 kubelet[2713]: E1216 12:08:27.943818 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:27.946686 containerd[1557]: time="2025-12-16T12:08:27.946649053Z" level=info msg="CreateContainer within sandbox \"ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:08:27.947000 audit[4824]: NETFILTER_CFG table=filter:141 family=2 entries=57 op=nft_register_chain pid=4824 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:08:27.947000 audit[4824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27812 a0=3 a1=ffffd0b41000 a2=0 a3=ffffbc026fa8 items=0 ppid=3990 pid=4824 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:27.947000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:08:27.955731 containerd[1557]: time="2025-12-16T12:08:27.955417190Z" level=info msg="Container cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:08:27.957656 containerd[1557]: time="2025-12-16T12:08:27.957528572Z" level=info msg="connecting to shim 15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6" address="unix:///run/containerd/s/2b41d57113da86689c57a97e92c1f61b53c51bdf3473b91cdbb576cdb675ccc3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:08:27.962008 containerd[1557]: time="2025-12-16T12:08:27.961939062Z" level=info msg="CreateContainer within sandbox \"ff9a4f8250b858d7376e902ddab6edc1502786af729e1f45261e0693f6a1448f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300\"" Dec 16 12:08:27.962625 containerd[1557]: time="2025-12-16T12:08:27.962586041Z" level=info msg="StartContainer for \"cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300\"" Dec 16 12:08:27.963451 containerd[1557]: time="2025-12-16T12:08:27.963422625Z" level=info msg="connecting to shim cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300" address="unix:///run/containerd/s/ad42a5bf9478fdc12409dc611145e875b1c9d205fbc4a5b40532086829423fd8" protocol=ttrpc version=3 Dec 16 12:08:27.983245 systemd[1]: Started cri-containerd-cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300.scope - libcontainer container cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300. Dec 16 12:08:27.997199 systemd[1]: Started cri-containerd-15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6.scope - libcontainer container 15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6. 
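
The ipam/ipam.go entries above show the Calico IPAM plugin confirming this node's affinity for the 192.168.88.128/26 block and then claiming 192.168.88.136 for the calico-apiserver-7f5b89cfdd-jgs6w endpoint, just as the coredns endpoint earlier received 192.168.88.135 from the same block. A minimal sketch, using only Python's standard ipaddress module, of the containment relation that assignment implies (the script is illustrative, not part of Calico):

    import ipaddress

    # Block with confirmed host affinity, as logged by ipam/ipam.go
    block = ipaddress.ip_network("192.168.88.128/26")

    # Addresses handed out to the two workload endpoints in this log
    assigned = [ipaddress.ip_address("192.168.88.135"),
                ipaddress.ip_address("192.168.88.136")]

    for addr in assigned:
        # Every address claimed from the block must fall inside it
        print(addr, "in", block, "->", addr in block)

A /26 block covers 192.168.88.128 through 192.168.88.191, 64 addresses in total, which is consistent with the sequential .135/.136 assignments logged here.
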
Dec 16 12:08:28.002000 audit: BPF prog-id=244 op=LOAD Dec 16 12:08:28.002000 audit: BPF prog-id=245 op=LOAD Dec 16 12:08:28.002000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.003000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:08:28.003000 audit[4846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.003000 audit: BPF prog-id=246 op=LOAD Dec 16 12:08:28.003000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.003000 audit: BPF prog-id=247 op=LOAD Dec 16 12:08:28.003000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.003000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:08:28.003000 audit[4846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.003000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:08:28.003000 audit[4846]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.003000 audit: BPF prog-id=248 op=LOAD Dec 16 12:08:28.003000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4779 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363313764626165626230333438663337363438666333616632346263 Dec 16 12:08:28.028266 containerd[1557]: time="2025-12-16T12:08:28.028230507Z" level=info msg="StartContainer for \"cc17dbaebb0348f37648fc3af24bc02c27049b9e18408b8dd6760d78b24a8300\" returns successfully" Dec 16 12:08:28.031000 audit: BPF prog-id=249 op=LOAD Dec 16 12:08:28.031000 audit: BPF prog-id=250 op=LOAD Dec 16 12:08:28.031000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.031000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:08:28.031000 audit[4859]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.032000 audit: BPF prog-id=251 op=LOAD Dec 16 12:08:28.032000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.032000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.032000 audit: BPF prog-id=252 op=LOAD Dec 16 12:08:28.032000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.032000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:08:28.032000 audit[4859]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.032000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:08:28.032000 audit[4859]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.032000 audit: BPF prog-id=253 op=LOAD Dec 16 12:08:28.032000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4834 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613731646339333739303138353764363230356236353739373935 Dec 16 12:08:28.035042 systemd-resolved[1254]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:08:28.050071 containerd[1557]: time="2025-12-16T12:08:28.048995501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:28.056348 containerd[1557]: time="2025-12-16T12:08:28.056247468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:08:28.056741 containerd[1557]: time="2025-12-16T12:08:28.056306590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:28.056913 kubelet[2713]: E1216 12:08:28.056869 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:28.056963 kubelet[2713]: E1216 12:08:28.056925 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:28.057304 kubelet[2713]: E1216 12:08:28.057255 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94n8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5b89cfdd-g5j9p_calico-apiserver(35f02a41-1704-43ba-9b8e-82676bfbc14d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:28.058494 kubelet[2713]: E1216 
12:08:28.058437 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d" Dec 16 12:08:28.069463 containerd[1557]: time="2025-12-16T12:08:28.069112276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5b89cfdd-jgs6w,Uid:0ff6ef5a-c884-4b41-929a-344b8f5bf998,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"15a71dc937901857d6205b6579795293f1367b3402c6194c34f85805b4d0a7e6\"" Dec 16 12:08:28.074742 containerd[1557]: time="2025-12-16T12:08:28.074696595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:08:28.290654 containerd[1557]: time="2025-12-16T12:08:28.289986069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:28.291257 containerd[1557]: time="2025-12-16T12:08:28.291224264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:08:28.291257 containerd[1557]: time="2025-12-16T12:08:28.291283546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:28.291482 kubelet[2713]: E1216 12:08:28.291429 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:28.291534 kubelet[2713]: E1216 12:08:28.291486 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:28.291669 kubelet[2713]: E1216 12:08:28.291627 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl7zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5b89cfdd-jgs6w_calico-apiserver(0ff6ef5a-c884-4b41-929a-344b8f5bf998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:28.293310 kubelet[2713]: E1216 12:08:28.293208 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998" Dec 16 12:08:28.724149 systemd-networkd[1471]: calibd8d706f23c: Gained IPv6LL Dec 16 12:08:28.724430 systemd-networkd[1471]: cali444223cc15a: Gained IPv6LL Dec 16 12:08:28.743099 kubelet[2713]: E1216 12:08:28.743061 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d" Dec 16 12:08:28.746166 kubelet[2713]: E1216 12:08:28.746060 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998" Dec 16 12:08:28.748221 kubelet[2713]: E1216 12:08:28.748126 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:28.748336 kubelet[2713]: E1216 12:08:28.748306 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50" Dec 16 12:08:28.749095 kubelet[2713]: E1216 12:08:28.749064 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:08:28.764000 audit[4913]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:28.764000 audit[4913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdae551e0 a2=0 a3=1 items=0 ppid=2826 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:28.773000 audit[4913]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:28.773000 audit[4913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdae551e0 a2=0 a3=1 items=0 ppid=2826 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:28.797089 kubelet[2713]: I1216 12:08:28.795394 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fp6fd" podStartSLOduration=39.795367233 podStartE2EDuration="39.795367233s" podCreationTimestamp="2025-12-16 12:07:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:08:28.795106026 +0000 UTC m=+45.411507477" watchObservedRunningTime="2025-12-16 12:08:28.795367233 +0000 UTC m=+45.411768684" Dec 16 12:08:28.806000 audit[4915]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:28.806000 audit[4915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcdddda40 a2=0 a3=1 items=0 ppid=2826 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:28.819000 audit[4915]: NETFILTER_CFG table=nat:145 family=2 entries=56 op=nft_register_chain pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:28.819000 audit[4915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcdddda40 a2=0 a3=1 items=0 ppid=2826 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:28.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:29.107329 systemd-networkd[1471]: cali3bc43c69934: Gained IPv6LL Dec 16 12:08:29.427276 systemd-networkd[1471]: cali7406992a0ee: Gained IPv6LL Dec 16 12:08:29.491330 systemd-networkd[1471]: cali220adb729a6: Gained IPv6LL Dec 16 12:08:29.750188 kubelet[2713]: E1216 12:08:29.749738 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:29.751243 kubelet[2713]: E1216 12:08:29.751181 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d" Dec 16 12:08:29.751446 kubelet[2713]: E1216 12:08:29.751286 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998" Dec 16 12:08:30.751482 kubelet[2713]: E1216 12:08:30.751440 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:32.224384 systemd[1]: Started 
sshd@9-10.0.0.13:22-10.0.0.1:54622.service - OpenSSH per-connection server daemon (10.0.0.1:54622). Dec 16 12:08:32.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.13:22-10.0.0.1:54622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.227484 kernel: kauditd_printk_skb: 163 callbacks suppressed Dec 16 12:08:32.227531 kernel: audit: type=1130 audit(1765886912.223:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.13:22-10.0.0.1:54622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.299000 audit[4918]: USER_ACCT pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.301894 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 54622 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:32.304357 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:32.300000 audit[4918]: CRED_ACQ pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.309800 kernel: audit: type=1101 audit(1765886912.299:751): pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.309848 kernel: audit: type=1103 audit(1765886912.300:752): pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.309862 kernel: audit: type=1006 audit(1765886912.300:753): pid=4918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:08:32.311874 systemd-logind[1531]: New session 11 of user core. Dec 16 12:08:32.318758 kernel: audit: type=1300 audit(1765886912.300:753): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee74f110 a2=3 a3=0 items=0 ppid=1 pid=4918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:32.318836 kernel: audit: type=1327 audit(1765886912.300:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:32.300000 audit[4918]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee74f110 a2=3 a3=0 items=0 ppid=1 pid=4918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:32.300000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:32.327242 systemd[1]: Started session-11.scope - Session 11 of User core. 
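
The kernel-side audit duplicates above carry raw timestamps of the form audit(1765886912.223:750), where the first field is Unix epoch seconds (with milliseconds) and the second is the record serial number. A small illustrative sketch, standard library only, that maps such a timestamp back to wall-clock time; the journal here appears to be in UTC, so the result should line up with the surrounding Dec 16 12:08:32 lines:

    from datetime import datetime, timezone

    # Raw audit timestamp from "audit(1765886912.223:750)": epoch seconds + serial
    epoch_seconds = 1765886912.223
    serial = 750

    stamp = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    print(serial, stamp.isoformat(timespec="milliseconds"))
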
Dec 16 12:08:32.330000 audit[4918]: USER_START pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.330000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.342963 kernel: audit: type=1105 audit(1765886912.330:754): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.343030 kernel: audit: type=1103 audit(1765886912.330:755): pid=4922 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.503324 sshd[4922]: Connection closed by 10.0.0.1 port 54622 Dec 16 12:08:32.504748 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:32.505000 audit[4918]: USER_END pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.505000 audit[4918]: CRED_DISP pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.514650 kernel: audit: type=1106 audit(1765886912.505:756): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.514695 kernel: audit: type=1104 audit(1765886912.505:757): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.515374 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:54622.service: Deactivated successfully. Dec 16 12:08:32.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.13:22-10.0.0.1:54622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.517043 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:08:32.517674 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. 
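
The PROCTITLE records throughout this section (for runc, xtables-nft-multi and sshd-session alike) carry the process command line hex-encoded, with NUL bytes separating argv entries. An illustrative decoder sketch; fed the sshd-session value above it should print something like "sshd-session: core [priv]", and the long runc and iptables-nft-restore values decode the same way:

    # Decode an audit PROCTITLE field: hex-encoded command line, NUL-separated argv
    def decode_proctitle(hexstr: str) -> str:
        raw = bytes.fromhex(hexstr)
        return " ".join(part.decode("utf-8", "replace")
                        for part in raw.split(b"\x00") if part)

    # Value taken verbatim from the sshd-session PROCTITLE record above
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
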
Dec 16 12:08:32.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.13:22-10.0.0.1:54626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.519973 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626). Dec 16 12:08:32.520488 systemd-logind[1531]: Removed session 11. Dec 16 12:08:32.571000 audit[4936]: USER_ACCT pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.572963 sshd[4936]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:32.573000 audit[4936]: CRED_ACQ pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.573000 audit[4936]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd4f84d0 a2=3 a3=0 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:32.573000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:32.575575 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:32.582649 systemd-logind[1531]: New session 12 of user core. Dec 16 12:08:32.593200 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:08:32.594000 audit[4936]: USER_START pid=4936 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.595000 audit[4940]: CRED_ACQ pid=4940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.720961 sshd[4940]: Connection closed by 10.0.0.1 port 54626 Dec 16 12:08:32.721330 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:32.724000 audit[4936]: USER_END pid=4936 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.724000 audit[4936]: CRED_DISP pid=4936 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.730203 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:54626.service: Deactivated successfully. 
Dec 16 12:08:32.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.13:22-10.0.0.1:54626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.732781 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:08:32.734429 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:08:32.736291 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:54628.service - OpenSSH per-connection server daemon (10.0.0.1:54628). Dec 16 12:08:32.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.13:22-10.0.0.1:54628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.739078 systemd-logind[1531]: Removed session 12. Dec 16 12:08:32.801000 audit[4952]: USER_ACCT pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.803218 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 54628 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:32.802000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.803000 audit[4952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed4321b0 a2=3 a3=0 items=0 ppid=1 pid=4952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:32.803000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:32.804816 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:32.809211 systemd-logind[1531]: New session 13 of user core. Dec 16 12:08:32.819198 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:08:32.820000 audit[4952]: USER_START pid=4952 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.821000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.913218 sshd[4956]: Connection closed by 10.0.0.1 port 54628 Dec 16 12:08:32.913545 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:32.913000 audit[4952]: USER_END pid=4952 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.913000 audit[4952]: CRED_DISP pid=4952 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:32.917751 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:08:32.917962 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:54628.service: Deactivated successfully. Dec 16 12:08:32.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.13:22-10.0.0.1:54628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:32.920669 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:08:32.922754 systemd-logind[1531]: Removed session 13. 
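
From here on the journal keeps cycling through the same failure: every ghcr.io/flatcar/calico image referenced at v3.30.4 (apiserver, kube-controllers and goldmane above, whisker and whisker-backend below) returns 404 Not Found from ghcr.io, so the affected pods stay in ErrImagePull / ImagePullBackOff. A rough sketch that tallies which image references are failing, assuming the journal has been exported to a plain text file; the journal.txt path is hypothetical:

    import re
    from collections import Counter

    # Matches the containerd error lines seen above, e.g.
    #   level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
    PULL_FAIL = re.compile(r'PullImage \\?"(?P<image>[^"\\]+)\\?" failed')

    failures = Counter()
    with open("journal.txt", encoding="utf-8", errors="replace") as fh:  # hypothetical export
        for line in fh:
            m = PULL_FAIL.search(line)
            if m:
                failures[m.group("image")] += 1

    for image, count in failures.most_common():
        print(f"{count:4d}  {image}")
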
Dec 16 12:08:34.558041 containerd[1557]: time="2025-12-16T12:08:34.557971030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:08:34.759209 containerd[1557]: time="2025-12-16T12:08:34.759159365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:34.760229 containerd[1557]: time="2025-12-16T12:08:34.760185830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:08:34.760370 containerd[1557]: time="2025-12-16T12:08:34.760265032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:34.760419 kubelet[2713]: E1216 12:08:34.760377 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:08:34.760726 kubelet[2713]: E1216 12:08:34.760433 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:08:34.760726 kubelet[2713]: E1216 12:08:34.760551 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:55f73bdb83b841a1b7d2492df2fc36c9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c68d78889-wl87r_calico-system(ad89a778-3861-43c0-9fbf-a4ff7b7dedcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:34.763370 containerd[1557]: time="2025-12-16T12:08:34.763343387Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:08:34.982449 containerd[1557]: time="2025-12-16T12:08:34.982213832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:34.983431 containerd[1557]: time="2025-12-16T12:08:34.983332499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:08:34.983516 containerd[1557]: time="2025-12-16T12:08:34.983402901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:34.983589 kubelet[2713]: E1216 12:08:34.983541 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:08:34.983631 kubelet[2713]: E1216 12:08:34.983594 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:08:34.983765 kubelet[2713]: E1216 12:08:34.983726 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c68d78889-wl87r_calico-system(ad89a778-3861-43c0-9fbf-a4ff7b7dedcf): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:34.984969 kubelet[2713]: E1216 12:08:34.984918 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c68d78889-wl87r" podUID="ad89a778-3861-43c0-9fbf-a4ff7b7dedcf" Dec 16 12:08:37.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.13:22-10.0.0.1:54640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:37.930181 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:54640.service - OpenSSH per-connection server daemon (10.0.0.1:54640). Dec 16 12:08:37.931140 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:08:37.931258 kernel: audit: type=1130 audit(1765886917.929:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.13:22-10.0.0.1:54640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:37.996000 audit[4975]: USER_ACCT pid=4975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:37.997849 sshd[4975]: Accepted publickey for core from 10.0.0.1 port 54640 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:38.002038 kernel: audit: type=1101 audit(1765886917.996:778): pid=4975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.003000 audit[4975]: CRED_ACQ pid=4975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.005558 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:38.010161 kernel: audit: type=1103 audit(1765886918.003:779): pid=4975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.010227 kernel: audit: type=1006 audit(1765886918.003:780): pid=4975 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:08:38.003000 audit[4975]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffebc63b70 a2=3 a3=0 items=0 ppid=1 pid=4975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:38.014166 kernel: audit: type=1300 audit(1765886918.003:780): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebc63b70 a2=3 a3=0 items=0 ppid=1 pid=4975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:38.014270 kernel: audit: type=1327 audit(1765886918.003:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:38.003000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:38.017859 systemd-logind[1531]: New session 14 of user core. Dec 16 12:08:38.027285 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:08:38.029000 audit[4975]: USER_START pid=4975 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.032000 audit[4981]: CRED_ACQ pid=4981 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.038665 kernel: audit: type=1105 audit(1765886918.029:781): pid=4975 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.038709 kernel: audit: type=1103 audit(1765886918.032:782): pid=4981 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.121956 sshd[4981]: Connection closed by 10.0.0.1 port 54640 Dec 16 12:08:38.122238 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:38.122000 audit[4975]: USER_END pid=4975 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.126981 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:54640.service: Deactivated successfully. Dec 16 12:08:38.122000 audit[4975]: CRED_DISP pid=4975 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.130458 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:08:38.131297 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 12:08:38.132001 kernel: audit: type=1106 audit(1765886918.122:783): pid=4975 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.132076 kernel: audit: type=1104 audit(1765886918.122:784): pid=4975 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:38.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.13:22-10.0.0.1:54640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:38.132898 systemd-logind[1531]: Removed session 14. Dec 16 12:08:39.553959 containerd[1557]: time="2025-12-16T12:08:39.553910733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:08:39.776214 containerd[1557]: time="2025-12-16T12:08:39.776160760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:39.777181 containerd[1557]: time="2025-12-16T12:08:39.777132501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:08:39.777272 containerd[1557]: time="2025-12-16T12:08:39.777204182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:39.777363 kubelet[2713]: E1216 12:08:39.777306 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:08:39.777363 kubelet[2713]: E1216 12:08:39.777357 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:08:39.777687 kubelet[2713]: E1216 12:08:39.777497 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crn5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2rr6k_calico-system(cce57e8e-9a64-42b6-8b26-7e3f06d75548): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:39.778719 kubelet[2713]: E1216 12:08:39.778657 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:08:40.557588 containerd[1557]: time="2025-12-16T12:08:40.557302384Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:08:40.782971 containerd[1557]: time="2025-12-16T12:08:40.782915801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:40.783962 containerd[1557]: time="2025-12-16T12:08:40.783905341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:08:40.784012 containerd[1557]: time="2025-12-16T12:08:40.783954702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:40.784140 kubelet[2713]: E1216 12:08:40.784101 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:08:40.785083 kubelet[2713]: E1216 12:08:40.784146 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:08:40.785083 kubelet[2713]: E1216 12:08:40.784365 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnh8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6dbc8bbd-fps4t_calico-system(bc6dd9ec-9c70-48ca-8951-46abf4919f50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:40.785242 containerd[1557]: time="2025-12-16T12:08:40.784581835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:08:40.785769 kubelet[2713]: E1216 12:08:40.785734 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50" Dec 16 12:08:40.983648 containerd[1557]: time="2025-12-16T12:08:40.983503537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:40.984611 containerd[1557]: time="2025-12-16T12:08:40.984575599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:08:40.984731 containerd[1557]: time="2025-12-16T12:08:40.984654081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:40.984938 kubelet[2713]: E1216 12:08:40.984877 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:40.984938 kubelet[2713]: E1216 12:08:40.984935 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:40.985131 kubelet[2713]: E1216 12:08:40.985090 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl7zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5b89cfdd-jgs6w_calico-apiserver(0ff6ef5a-c884-4b41-929a-344b8f5bf998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:40.986390 kubelet[2713]: E1216 12:08:40.986347 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998" Dec 16 12:08:41.554209 containerd[1557]: time="2025-12-16T12:08:41.554104047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:08:41.775527 containerd[1557]: time="2025-12-16T12:08:41.775479580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:41.776506 containerd[1557]: time="2025-12-16T12:08:41.776469000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:08:41.776571 containerd[1557]: time="2025-12-16T12:08:41.776515801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:41.776695 
kubelet[2713]: E1216 12:08:41.776656 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:41.776759 kubelet[2713]: E1216 12:08:41.776709 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:08:41.776902 kubelet[2713]: E1216 12:08:41.776860 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94n8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5b89cfdd-g5j9p_calico-apiserver(35f02a41-1704-43ba-9b8e-82676bfbc14d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:41.778110 kubelet[2713]: E1216 12:08:41.778059 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d" Dec 16 12:08:42.553639 containerd[1557]: time="2025-12-16T12:08:42.553587371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:08:42.770407 containerd[1557]: time="2025-12-16T12:08:42.770345620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:42.771539 containerd[1557]: time="2025-12-16T12:08:42.771499563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:08:42.771619 containerd[1557]: time="2025-12-16T12:08:42.771529404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:42.771842 kubelet[2713]: E1216 12:08:42.771797 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:08:42.772393 kubelet[2713]: E1216 12:08:42.772181 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:08:42.772393 kubelet[2713]: E1216 12:08:42.772334 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26t9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:42.774577 containerd[1557]: time="2025-12-16T12:08:42.774521623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:08:42.991627 containerd[1557]: time="2025-12-16T12:08:42.991453796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:08:42.992916 containerd[1557]: time="2025-12-16T12:08:42.992828383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:08:42.993038 containerd[1557]: time="2025-12-16T12:08:42.992892984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:08:42.993144 kubelet[2713]: E1216 12:08:42.993099 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:08:42.993245 kubelet[2713]: E1216 12:08:42.993154 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:08:42.993245 kubelet[2713]: E1216 12:08:42.993282 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26t9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:08:42.994804 kubelet[2713]: E1216 12:08:42.994772 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:43.134755 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:34760.service - OpenSSH per-connection server daemon (10.0.0.1:34760). Dec 16 12:08:43.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.13:22-10.0.0.1:34760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:43.136160 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:08:43.136217 kernel: audit: type=1130 audit(1765886923.133:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.13:22-10.0.0.1:34760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:08:43.202000 audit[5000]: USER_ACCT pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.203598 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 34760 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:43.206583 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:43.204000 audit[5000]: CRED_ACQ pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.210893 kernel: audit: type=1101 audit(1765886923.202:787): pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.210955 kernel: audit: type=1103 audit(1765886923.204:788): pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.213194 kernel: audit: type=1006 audit(1765886923.204:789): pid=5000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:08:43.213282 kernel: audit: type=1300 audit(1765886923.204:789): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc466e60 a2=3 a3=0 items=0 ppid=1 pid=5000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:43.204000 audit[5000]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc466e60 a2=3 a3=0 items=0 ppid=1 pid=5000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:43.215552 systemd-logind[1531]: New session 15 of user core. Dec 16 12:08:43.204000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:43.218648 kernel: audit: type=1327 audit(1765886923.204:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:43.225252 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 12:08:43.228000 audit[5000]: USER_START pid=5000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.230000 audit[5004]: CRED_ACQ pid=5004 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.237483 kernel: audit: type=1105 audit(1765886923.228:790): pid=5000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.237580 kernel: audit: type=1103 audit(1765886923.230:791): pid=5004 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.318569 sshd[5004]: Connection closed by 10.0.0.1 port 34760 Dec 16 12:08:43.318917 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:43.319000 audit[5000]: USER_END pid=5000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.324048 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:34760.service: Deactivated successfully. Dec 16 12:08:43.319000 audit[5000]: CRED_DISP pid=5000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.327386 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:08:43.328564 kernel: audit: type=1106 audit(1765886923.319:792): pid=5000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.328678 kernel: audit: type=1104 audit(1765886923.319:793): pid=5000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:43.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.13:22-10.0.0.1:34760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:43.329251 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:08:43.331228 systemd-logind[1531]: Removed session 15. 
Dec 16 12:08:47.554666 kubelet[2713]: E1216 12:08:47.554610 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c68d78889-wl87r" podUID="ad89a778-3861-43c0-9fbf-a4ff7b7dedcf" Dec 16 12:08:48.334775 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:34772.service - OpenSSH per-connection server daemon (10.0.0.1:34772). Dec 16 12:08:48.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.13:22-10.0.0.1:34772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:48.339019 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:08:48.339087 kernel: audit: type=1130 audit(1765886928.334:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.13:22-10.0.0.1:34772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:48.403000 audit[5022]: USER_ACCT pid=5022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.404973 sshd[5022]: Accepted publickey for core from 10.0.0.1 port 34772 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:48.409054 kernel: audit: type=1101 audit(1765886928.403:796): pid=5022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.408000 audit[5022]: CRED_ACQ pid=5022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.410823 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:48.414590 kernel: audit: type=1103 audit(1765886928.408:797): pid=5022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.414657 kernel: audit: type=1006 audit(1765886928.409:798): pid=5022 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:08:48.409000 audit[5022]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff755ceb0 a2=3 a3=0 items=0 ppid=1 pid=5022 auid=500 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:48.416717 systemd-logind[1531]: New session 16 of user core. Dec 16 12:08:48.418500 kernel: audit: type=1300 audit(1765886928.409:798): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff755ceb0 a2=3 a3=0 items=0 ppid=1 pid=5022 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:48.409000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:48.420090 kernel: audit: type=1327 audit(1765886928.409:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:48.431228 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:08:48.433000 audit[5022]: USER_START pid=5022 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.434000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.442170 kernel: audit: type=1105 audit(1765886928.433:799): pid=5022 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.442340 kernel: audit: type=1103 audit(1765886928.434:800): pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.558897 sshd[5026]: Connection closed by 10.0.0.1 port 34772 Dec 16 12:08:48.559232 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:48.559000 audit[5022]: USER_END pid=5022 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.563321 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:08:48.563512 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:34772.service: Deactivated successfully. Dec 16 12:08:48.559000 audit[5022]: CRED_DISP pid=5022 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.566268 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 12:08:48.568298 kernel: audit: type=1106 audit(1765886928.559:801): pid=5022 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.568381 kernel: audit: type=1104 audit(1765886928.559:802): pid=5022 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:48.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.13:22-10.0.0.1:34772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:48.568431 systemd-logind[1531]: Removed session 16. Dec 16 12:08:48.786346 kubelet[2713]: E1216 12:08:48.786041 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:52.555038 kubelet[2713]: E1216 12:08:52.553833 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:08:53.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.13:22-10.0.0.1:60652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:53.572556 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:60652.service - OpenSSH per-connection server daemon (10.0.0.1:60652). Dec 16 12:08:53.573649 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:08:53.573688 kernel: audit: type=1130 audit(1765886933.571:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.13:22-10.0.0.1:60652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:08:53.649000 audit[5067]: USER_ACCT pid=5067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.654407 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 60652 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:53.658349 kernel: audit: type=1101 audit(1765886933.649:805): pid=5067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.658433 kernel: audit: type=1103 audit(1765886933.653:806): pid=5067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.653000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.656136 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:53.660961 kernel: audit: type=1006 audit(1765886933.653:807): pid=5067 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:08:53.653000 audit[5067]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe586a340 a2=3 a3=0 items=0 ppid=1 pid=5067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:53.663539 systemd-logind[1531]: New session 17 of user core. Dec 16 12:08:53.664867 kernel: audit: type=1300 audit(1765886933.653:807): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe586a340 a2=3 a3=0 items=0 ppid=1 pid=5067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:53.664925 kernel: audit: type=1327 audit(1765886933.653:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:53.653000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:53.673253 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:08:53.675000 audit[5067]: USER_START pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.677000 audit[5071]: CRED_ACQ pid=5071 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.683954 kernel: audit: type=1105 audit(1765886933.675:808): pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.684010 kernel: audit: type=1103 audit(1765886933.677:809): pid=5071 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.836217 sshd[5071]: Connection closed by 10.0.0.1 port 60652 Dec 16 12:08:53.836779 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:53.837000 audit[5067]: USER_END pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.837000 audit[5067]: CRED_DISP pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.846619 kernel: audit: type=1106 audit(1765886933.837:810): pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.846711 kernel: audit: type=1104 audit(1765886933.837:811): pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.851491 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:60652.service: Deactivated successfully. Dec 16 12:08:53.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.13:22-10.0.0.1:60652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:53.854141 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:08:53.858088 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:08:53.860237 systemd-logind[1531]: Removed session 17. 
Dec 16 12:08:53.862514 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:60668.service - OpenSSH per-connection server daemon (10.0.0.1:60668). Dec 16 12:08:53.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.13:22-10.0.0.1:60668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:53.928000 audit[5085]: USER_ACCT pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.929774 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 60668 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:53.929000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.929000 audit[5085]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea14e4c0 a2=3 a3=0 items=0 ppid=1 pid=5085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:53.929000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:53.931565 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:53.936077 systemd-logind[1531]: New session 18 of user core. Dec 16 12:08:53.946191 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:08:53.947000 audit[5085]: USER_START pid=5085 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:53.949000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.125551 sshd[5089]: Connection closed by 10.0.0.1 port 60668 Dec 16 12:08:54.125946 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:54.127000 audit[5085]: USER_END pid=5085 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.127000 audit[5085]: CRED_DISP pid=5085 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.136635 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:60668.service: Deactivated successfully. 
Dec 16 12:08:54.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.13:22-10.0.0.1:60668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:54.139130 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:08:54.140331 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:08:54.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.13:22-10.0.0.1:60676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:54.143374 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:60676.service - OpenSSH per-connection server daemon (10.0.0.1:60676). Dec 16 12:08:54.145332 systemd-logind[1531]: Removed session 18. Dec 16 12:08:54.210000 audit[5100]: USER_ACCT pid=5100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.212466 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 60676 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:54.212000 audit[5100]: CRED_ACQ pid=5100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.212000 audit[5100]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecf73bd0 a2=3 a3=0 items=0 ppid=1 pid=5100 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:54.212000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:54.214254 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:54.218891 systemd-logind[1531]: New session 19 of user core. Dec 16 12:08:54.230438 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:08:54.232000 audit[5100]: USER_START pid=5100 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.233000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.553430 kubelet[2713]: E1216 12:08:54.552980 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998" Dec 16 12:08:54.553430 kubelet[2713]: E1216 12:08:54.553097 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50" Dec 16 12:08:54.833000 audit[5118]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:54.833000 audit[5118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffd437a30 a2=0 a3=1 items=0 ppid=2826 pid=5118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:54.833000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:54.842512 sshd[5104]: Connection closed by 10.0.0.1 port 60676 Dec 16 12:08:54.842684 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:54.842000 audit[5118]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:54.842000 audit[5118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffd437a30 a2=0 a3=1 items=0 ppid=2826 pid=5118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:54.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:54.845000 audit[5100]: USER_END pid=5100 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.845000 audit[5100]: CRED_DISP pid=5100 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.853807 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:60676.service: Deactivated successfully. Dec 16 12:08:54.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.13:22-10.0.0.1:60676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:54.857711 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:08:54.861216 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:08:54.860000 audit[5122]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:54.860000 audit[5122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd1e60e80 a2=0 a3=1 items=0 ppid=2826 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:54.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:54.865000 audit[5122]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:54.865000 audit[5122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd1e60e80 a2=0 a3=1 items=0 ppid=2826 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:54.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:54.869377 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:60686.service - OpenSSH per-connection server daemon (10.0.0.1:60686). Dec 16 12:08:54.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.13:22-10.0.0.1:60686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:54.870385 systemd-logind[1531]: Removed session 19. 
Dec 16 12:08:54.937000 audit[5125]: USER_ACCT pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.938578 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 60686 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:54.939000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.939000 audit[5125]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7eb61c0 a2=3 a3=0 items=0 ppid=1 pid=5125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:54.939000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:54.941260 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:54.949315 systemd-logind[1531]: New session 20 of user core. Dec 16 12:08:54.956198 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:08:54.958000 audit[5125]: USER_START pid=5125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:54.961000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.210239 sshd[5129]: Connection closed by 10.0.0.1 port 60686 Dec 16 12:08:55.212217 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:55.213000 audit[5125]: USER_END pid=5125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.213000 audit[5125]: CRED_DISP pid=5125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.221932 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:60686.service: Deactivated successfully. Dec 16 12:08:55.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.13:22-10.0.0.1:60686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:55.225060 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:08:55.226659 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. 
Dec 16 12:08:55.230175 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:60696.service - OpenSSH per-connection server daemon (10.0.0.1:60696). Dec 16 12:08:55.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.13:22-10.0.0.1:60696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:55.232277 systemd-logind[1531]: Removed session 20. Dec 16 12:08:55.305573 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 60696 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:08:55.304000 audit[5140]: USER_ACCT pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.306000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.306000 audit[5140]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcce4d0b0 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:55.306000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:08:55.308423 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:08:55.315683 systemd-logind[1531]: New session 21 of user core. Dec 16 12:08:55.321215 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:08:55.323000 audit[5140]: USER_START pid=5140 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.325000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.411334 sshd[5144]: Connection closed by 10.0.0.1 port 60696 Dec 16 12:08:55.411383 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Dec 16 12:08:55.412000 audit[5140]: USER_END pid=5140 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.412000 audit[5140]: CRED_DISP pid=5140 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:08:55.416373 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:60696.service: Deactivated successfully. 
Dec 16 12:08:55.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.13:22-10.0.0.1:60696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:08:55.418688 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:08:55.419889 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:08:55.421396 systemd-logind[1531]: Removed session 21. Dec 16 12:08:55.555605 kubelet[2713]: E1216 12:08:55.555386 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f" Dec 16 12:08:56.552906 kubelet[2713]: E1216 12:08:56.552850 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:08:56.553683 kubelet[2713]: E1216 12:08:56.553643 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d" Dec 16 12:08:59.365000 audit[5168]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:59.367167 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:08:59.367267 kernel: audit: type=1325 audit(1765886939.365:853): table=filter:150 family=2 entries=26 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:59.365000 audit[5168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc950b640 a2=0 a3=1 items=0 ppid=2826 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:59.373478 kernel: audit: type=1300 audit(1765886939.365:853): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc950b640 a2=0 a3=1 items=0 ppid=2826 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:59.373549 kernel: audit: type=1327 audit(1765886939.365:853): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:59.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:59.380000 audit[5168]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:59.380000 audit[5168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc950b640 a2=0 a3=1 items=0 ppid=2826 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:59.387968 kernel: audit: type=1325 audit(1765886939.380:854): table=nat:151 family=2 entries=104 op=nft_register_chain pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:08:59.388045 kernel: audit: type=1300 audit(1765886939.380:854): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc950b640 a2=0 a3=1 items=0 ppid=2826 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:08:59.388077 kernel: audit: type=1327 audit(1765886939.380:854): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:59.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:08:59.552676 kubelet[2713]: E1216 12:08:59.552643 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:09:00.427963 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:60706.service - OpenSSH per-connection server daemon (10.0.0.1:60706). Dec 16 12:09:00.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.13:22-10.0.0.1:60706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:09:00.435074 kernel: audit: type=1130 audit(1765886940.427:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.13:22-10.0.0.1:60706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:09:00.503000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.509133 kernel: audit: type=1101 audit(1765886940.503:856): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.509226 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 60706 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:09:00.508000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.514464 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:09:00.517092 kernel: audit: type=1103 audit(1765886940.508:857): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.518002 kernel: audit: type=1006 audit(1765886940.512:858): pid=5170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:09:00.512000 audit[5170]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb6faf20 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:09:00.512000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:09:00.526416 systemd-logind[1531]: New session 22 of user core. Dec 16 12:09:00.532238 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:09:00.534000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.537000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.555093 containerd[1557]: time="2025-12-16T12:09:00.555038692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:09:00.641769 sshd[5174]: Connection closed by 10.0.0.1 port 60706 Dec 16 12:09:00.642150 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Dec 16 12:09:00.642000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.642000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:00.647161 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:09:00.647456 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:60706.service: Deactivated successfully. Dec 16 12:09:00.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.13:22-10.0.0.1:60706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:09:00.651037 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:09:00.653848 systemd-logind[1531]: Removed session 22. 
Dec 16 12:09:00.797933 containerd[1557]: time="2025-12-16T12:09:00.797807955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:09:00.799649 containerd[1557]: time="2025-12-16T12:09:00.799607379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:09:00.799997 containerd[1557]: time="2025-12-16T12:09:00.799631779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:09:00.800194 kubelet[2713]: E1216 12:09:00.800082 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:09:00.800989 kubelet[2713]: E1216 12:09:00.800513 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:09:00.801230 kubelet[2713]: E1216 12:09:00.801159 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:55f73bdb83b841a1b7d2492df2fc36c9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c68d78889-wl87r_calico-system(ad89a778-3861-43c0-9fbf-a4ff7b7dedcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:09:00.803675 containerd[1557]: time="2025-12-16T12:09:00.803575670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:09:01.049291 containerd[1557]: time="2025-12-16T12:09:01.049157917Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Dec 16 12:09:01.050172 containerd[1557]: time="2025-12-16T12:09:01.050130449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:09:01.050267 containerd[1557]: time="2025-12-16T12:09:01.050221810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:09:01.050417 kubelet[2713]: E1216 12:09:01.050377 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:09:01.050490 kubelet[2713]: E1216 12:09:01.050432 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:09:01.051066 kubelet[2713]: E1216 12:09:01.050988 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c68d78889-wl87r_calico-system(ad89a778-3861-43c0-9fbf-a4ff7b7dedcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:09:01.052825 kubelet[2713]: E1216 12:09:01.052755 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c68d78889-wl87r" podUID="ad89a778-3861-43c0-9fbf-a4ff7b7dedcf" Dec 16 12:09:05.554567 containerd[1557]: time="2025-12-16T12:09:05.554514661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:09:05.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.13:22-10.0.0.1:54138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:09:05.653447 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:54138.service - OpenSSH per-connection server daemon (10.0.0.1:54138). Dec 16 12:09:05.654264 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:09:05.654307 kernel: audit: type=1130 audit(1765886945.652:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.13:22-10.0.0.1:54138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:09:05.713000 audit[5188]: USER_ACCT pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.717944 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 54138 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY Dec 16 12:09:05.718255 kernel: audit: type=1101 audit(1765886945.713:865): pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.717000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.719185 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:09:05.723693 kernel: audit: type=1103 audit(1765886945.717:866): pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.723744 kernel: audit: type=1006 audit(1765886945.717:867): pid=5188 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:09:05.723838 kernel: audit: type=1300 audit(1765886945.717:867): arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=fffff72c2f80 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:09:05.717000 audit[5188]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff72c2f80 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:09:05.725960 systemd-logind[1531]: New session 23 of user core. Dec 16 12:09:05.717000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:09:05.728824 kernel: audit: type=1327 audit(1765886945.717:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:09:05.734232 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:09:05.735000 audit[5188]: USER_START pid=5188 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.737000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.744083 kernel: audit: type=1105 audit(1765886945.735:868): pid=5188 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.744178 kernel: audit: type=1103 audit(1765886945.737:869): pid=5192 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.781778 containerd[1557]: time="2025-12-16T12:09:05.781732067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:09:05.784462 containerd[1557]: time="2025-12-16T12:09:05.783845372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:09:05.784462 containerd[1557]: time="2025-12-16T12:09:05.783929733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:09:05.784568 kubelet[2713]: E1216 12:09:05.784133 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:09:05.784568 kubelet[2713]: E1216 12:09:05.784185 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:09:05.784568 kubelet[2713]: E1216 12:09:05.784316 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crn5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2rr6k_calico-system(cce57e8e-9a64-42b6-8b26-7e3f06d75548): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:09:05.785811 kubelet[2713]: E1216 12:09:05.785760 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2rr6k" podUID="cce57e8e-9a64-42b6-8b26-7e3f06d75548" Dec 16 12:09:05.859667 sshd[5192]: Connection closed by 10.0.0.1 port 54138 Dec 16 12:09:05.860332 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Dec 16 12:09:05.861000 audit[5188]: USER_END pid=5188 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.861000 audit[5188]: CRED_DISP pid=5188 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.868106 kernel: audit: type=1106 audit(1765886945.861:870): pid=5188 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.870356 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:54138.service: Deactivated successfully. Dec 16 12:09:05.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.13:22-10.0.0.1:54138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:09:05.874046 kernel: audit: type=1104 audit(1765886945.861:871): pid=5188 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:09:05.875417 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:09:05.876471 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:09:05.878354 systemd-logind[1531]: Removed session 23. 
Dec 16 12:09:06.553232 containerd[1557]: time="2025-12-16T12:09:06.553171241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:09:06.762416 containerd[1557]: time="2025-12-16T12:09:06.762357228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:09:06.763351 containerd[1557]: time="2025-12-16T12:09:06.763278398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:09:06.763435 containerd[1557]: time="2025-12-16T12:09:06.763320159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:09:06.763523 kubelet[2713]: E1216 12:09:06.763486 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:09:06.763592 kubelet[2713]: E1216 12:09:06.763549 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:09:06.764056 kubelet[2713]: E1216 12:09:06.763678 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl7zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5b89cfdd-jgs6w_calico-apiserver(0ff6ef5a-c884-4b41-929a-344b8f5bf998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:09:06.765303 kubelet[2713]: E1216 12:09:06.765250 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-jgs6w" podUID="0ff6ef5a-c884-4b41-929a-344b8f5bf998"
Dec 16 12:09:07.555896 containerd[1557]: time="2025-12-16T12:09:07.555808954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 12:09:07.769211 containerd[1557]: time="2025-12-16T12:09:07.769149619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:09:07.770241 containerd[1557]: time="2025-12-16T12:09:07.770185511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 12:09:07.770384 containerd[1557]: time="2025-12-16T12:09:07.770254951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:09:07.770453 kubelet[2713]: E1216 12:09:07.770414 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:09:07.770719 kubelet[2713]: E1216 12:09:07.770465 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:09:07.770719 kubelet[2713]: E1216 12:09:07.770607 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnh8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6dbc8bbd-fps4t_calico-system(bc6dd9ec-9c70-48ca-8951-46abf4919f50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:09:07.771920 kubelet[2713]: E1216 12:09:07.771863 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6dbc8bbd-fps4t" podUID="bc6dd9ec-9c70-48ca-8951-46abf4919f50"
Dec 16 12:09:09.554247 containerd[1557]: time="2025-12-16T12:09:09.554191793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:09:09.775251 containerd[1557]: time="2025-12-16T12:09:09.775189728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:09:09.776264 containerd[1557]: time="2025-12-16T12:09:09.776209659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:09:09.776352 containerd[1557]: time="2025-12-16T12:09:09.776297059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:09:09.776470 kubelet[2713]: E1216 12:09:09.776426 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:09:09.776884 kubelet[2713]: E1216 12:09:09.776478 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:09:09.776884 kubelet[2713]: E1216 12:09:09.776615 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26t9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:09:09.780281 containerd[1557]: time="2025-12-16T12:09:09.780253382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:09:10.004265 containerd[1557]: time="2025-12-16T12:09:10.004199147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:09:10.012253 containerd[1557]: time="2025-12-16T12:09:10.009980528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:09:10.012253 containerd[1557]: time="2025-12-16T12:09:10.010029929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:09:10.012390 kubelet[2713]: E1216 12:09:10.012263 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:09:10.012390 kubelet[2713]: E1216 12:09:10.012318 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:09:10.012486 kubelet[2713]: E1216 12:09:10.012441 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26t9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fx4jj_calico-system(8b37b908-ae1e-416d-92b4-0b4c5064435f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:09:10.023307 kubelet[2713]: E1216 12:09:10.023249 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fx4jj" podUID="8b37b908-ae1e-416d-92b4-0b4c5064435f"
Dec 16 12:09:10.555032 containerd[1557]: time="2025-12-16T12:09:10.554537265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:09:10.795035 containerd[1557]: time="2025-12-16T12:09:10.794975438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:09:10.796027 containerd[1557]: time="2025-12-16T12:09:10.795971049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:09:10.796149 containerd[1557]: time="2025-12-16T12:09:10.796074170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:09:10.796244 kubelet[2713]: E1216 12:09:10.796190 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:09:10.796541 kubelet[2713]: E1216 12:09:10.796242 2713 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:09:10.796541 kubelet[2713]: E1216 12:09:10.796370 2713 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94n8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5b89cfdd-g5j9p_calico-apiserver(35f02a41-1704-43ba-9b8e-82676bfbc14d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:09:10.797869 kubelet[2713]: E1216 12:09:10.797824 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5b89cfdd-g5j9p" podUID="35f02a41-1704-43ba-9b8e-82676bfbc14d"
Dec 16 12:09:10.874484 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:54140.service - OpenSSH per-connection server daemon (10.0.0.1:54140).
Dec 16 12:09:10.876006 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:09:10.876082 kernel: audit: type=1130 audit(1765886950.873:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.13:22-10.0.0.1:54140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:09:10.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.13:22-10.0.0.1:54140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:09:10.936000 audit[5208]: USER_ACCT pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.937763 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 54140 ssh2: RSA SHA256:/XyDWm2N9JZWGT3GjtC51FjWxFfjvfz9iEVjVrokVAY
Dec 16 12:09:10.943251 kernel: audit: type=1101 audit(1765886950.936:874): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.943000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.945296 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:09:10.950129 kernel: audit: type=1103 audit(1765886950.943:875): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.950178 kernel: audit: type=1006 audit(1765886950.943:876): pid=5208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 16 12:09:10.943000 audit[5208]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe418e4f0 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:09:10.951579 systemd-logind[1531]: New session 24 of user core.
Dec 16 12:09:10.953937 kernel: audit: type=1300 audit(1765886950.943:876): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe418e4f0 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:09:10.953987 kernel: audit: type=1327 audit(1765886950.943:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:09:10.943000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:09:10.961228 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 12:09:10.962000 audit[5208]: USER_START pid=5208 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.964000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.971254 kernel: audit: type=1105 audit(1765886950.962:877): pid=5208 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:10.971334 kernel: audit: type=1103 audit(1765886950.964:878): pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:11.044740 sshd[5212]: Connection closed by 10.0.0.1 port 54140
Dec 16 12:09:11.045219 sshd-session[5208]: pam_unix(sshd:session): session closed for user core
Dec 16 12:09:11.046000 audit[5208]: USER_END pid=5208 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:11.049488 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit.
Dec 16 12:09:11.049661 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:54140.service: Deactivated successfully.
Dec 16 12:09:11.046000 audit[5208]: CRED_DISP pid=5208 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:11.052042 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 12:09:11.053512 systemd-logind[1531]: Removed session 24.
Dec 16 12:09:11.053875 kernel: audit: type=1106 audit(1765886951.046:879): pid=5208 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:11.053924 kernel: audit: type=1104 audit(1765886951.046:880): pid=5208 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:09:11.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.13:22-10.0.0.1:54140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:09:11.554247 kubelet[2713]: E1216 12:09:11.553669 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"