Dec 12 17:40:19.345426 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 12 17:40:19.345454 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:40:19.345462 kernel: KASLR enabled Dec 12 17:40:19.345468 kernel: efi: EFI v2.7 by EDK II Dec 12 17:40:19.345474 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 12 17:40:19.345480 kernel: random: crng init done Dec 12 17:40:19.345487 kernel: secureboot: Secure boot disabled Dec 12 17:40:19.345493 kernel: ACPI: Early table checksum verification disabled Dec 12 17:40:19.345500 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 12 17:40:19.345507 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 12 17:40:19.345513 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345519 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345525 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345532 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345541 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345547 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345554 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345560 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345567 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:40:19.345573 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 12 17:40:19.345580 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:40:19.345586 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:40:19.345594 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 12 17:40:19.345600 kernel: Zone ranges: Dec 12 17:40:19.345607 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:40:19.345613 kernel: DMA32 empty Dec 12 17:40:19.345619 kernel: Normal empty Dec 12 17:40:19.345625 kernel: Device empty Dec 12 17:40:19.345632 kernel: Movable zone start for each node Dec 12 17:40:19.345638 kernel: Early memory node ranges Dec 12 17:40:19.345644 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 12 17:40:19.345651 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 12 17:40:19.345657 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 12 17:40:19.345663 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 12 17:40:19.345671 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 12 17:40:19.345677 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 12 17:40:19.345683 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 12 17:40:19.345690 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 12 17:40:19.345696 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 12 17:40:19.345702 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 12 17:40:19.345712 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 
12 17:40:19.345719 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 12 17:40:19.345726 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 12 17:40:19.345733 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:40:19.345740 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 12 17:40:19.345747 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 12 17:40:19.345754 kernel: psci: probing for conduit method from ACPI. Dec 12 17:40:19.345760 kernel: psci: PSCIv1.1 detected in firmware. Dec 12 17:40:19.345768 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:40:19.345775 kernel: psci: Trusted OS migration not required Dec 12 17:40:19.345782 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:40:19.345789 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 12 17:40:19.345796 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:40:19.345815 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:40:19.345823 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 12 17:40:19.345829 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:40:19.345842 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:40:19.345849 kernel: CPU features: detected: Spectre-v4 Dec 12 17:40:19.345856 kernel: CPU features: detected: Spectre-BHB Dec 12 17:40:19.345865 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:40:19.345872 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:40:19.345879 kernel: CPU features: detected: ARM erratum 1418040 Dec 12 17:40:19.345885 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:40:19.345892 kernel: alternatives: applying boot alternatives Dec 12 17:40:19.345900 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:40:19.345907 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:40:19.345914 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:40:19.345921 kernel: Fallback order for Node 0: 0 Dec 12 17:40:19.345928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 12 17:40:19.345936 kernel: Policy zone: DMA Dec 12 17:40:19.345951 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:40:19.345957 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 12 17:40:19.345964 kernel: software IO TLB: area num 4. Dec 12 17:40:19.345971 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 12 17:40:19.345977 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 12 17:40:19.345984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 12 17:40:19.345991 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:40:19.345998 kernel: rcu: RCU event tracing is enabled. Dec 12 17:40:19.346005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 12 17:40:19.346012 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:40:19.346021 kernel: Tracing variant of Tasks RCU enabled. 
Dec 12 17:40:19.346028 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:40:19.346035 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:40:19.346042 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:40:19.346049 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:40:19.346056 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:40:19.346063 kernel: GICv3: 256 SPIs implemented
Dec 12 17:40:19.346069 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:40:19.346076 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:40:19.346083 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:40:19.346089 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:40:19.346097 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:40:19.346104 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:40:19.346112 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:40:19.346119 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:40:19.346126 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:40:19.346133 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:40:19.346140 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:40:19.346147 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:40:19.346154 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:40:19.346161 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:40:19.346168 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:40:19.346176 kernel: arm-pv: using stolen time PV
Dec 12 17:40:19.346184 kernel: Console: colour dummy device 80x25
Dec 12 17:40:19.346191 kernel: ACPI: Core revision 20240827
Dec 12 17:40:19.346199 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:40:19.346206 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:40:19.346214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:40:19.346221 kernel: landlock: Up and running.
Dec 12 17:40:19.346228 kernel: SELinux: Initializing.
Dec 12 17:40:19.346237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:40:19.346245 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:40:19.346252 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:40:19.346259 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:40:19.346267 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:40:19.346274 kernel: Remapping and enabling EFI services.
Dec 12 17:40:19.346281 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:40:19.346290 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:40:19.346302 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:40:19.346311 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:40:19.346319 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:40:19.346327 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:40:19.346334 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:40:19.346342 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:40:19.346351 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:40:19.346359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:40:19.346367 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:40:19.346374 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:40:19.346382 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:40:19.346390 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:40:19.346397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:40:19.346406 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:40:19.346414 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:40:19.346422 kernel: SMP: Total of 4 processors activated.
Dec 12 17:40:19.346429 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:40:19.346437 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:40:19.346458 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:40:19.346468 kernel: CPU features: detected: Common not Private translations
Dec 12 17:40:19.346478 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:40:19.346486 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:40:19.346494 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:40:19.346502 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:40:19.346509 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:40:19.346516 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:40:19.346524 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:40:19.346532 kernel: alternatives: applying system-wide alternatives
Dec 12 17:40:19.346542 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:40:19.346550 kernel: Memory: 2450912K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 99040K reserved, 16384K cma-reserved)
Dec 12 17:40:19.346558 kernel: devtmpfs: initialized
Dec 12 17:40:19.346566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:40:19.346573 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:40:19.346581 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:40:19.346589 kernel: 0 pages in range for non-PLT usage
Dec 12 17:40:19.346598 kernel: 515184 pages in range for PLT usage
Dec 12 17:40:19.346605 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:40:19.346613 kernel: SMBIOS 3.0.0 present.
Dec 12 17:40:19.346620 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 12 17:40:19.346628 kernel: DMI: Memory slots populated: 1/1 Dec 12 17:40:19.346636 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:40:19.346643 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:40:19.346653 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:40:19.346661 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:40:19.346668 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:40:19.346676 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1 Dec 12 17:40:19.346684 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:40:19.346691 kernel: cpuidle: using governor menu Dec 12 17:40:19.346699 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 12 17:40:19.346708 kernel: ASID allocator initialised with 32768 entries Dec 12 17:40:19.346716 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:40:19.346723 kernel: Serial: AMBA PL011 UART driver Dec 12 17:40:19.346731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:40:19.346739 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:40:19.346746 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:40:19.346754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:40:19.346763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:40:19.346771 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:40:19.346778 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:40:19.346786 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:40:19.346793 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:40:19.346817 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:40:19.346825 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:40:19.346833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:40:19.346849 kernel: ACPI: Interpreter enabled Dec 12 17:40:19.346857 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:40:19.346864 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:40:19.346872 kernel: ACPI: CPU0 has been hot-added Dec 12 17:40:19.346880 kernel: ACPI: CPU1 has been hot-added Dec 12 17:40:19.346887 kernel: ACPI: CPU2 has been hot-added Dec 12 17:40:19.346895 kernel: ACPI: CPU3 has been hot-added Dec 12 17:40:19.346904 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:40:19.346912 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:40:19.346919 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 17:40:19.347098 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:40:19.347187 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:40:19.347271 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:40:19.347355 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 12 17:40:19.347437 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 12 17:40:19.347448 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 12 17:40:19.347456 
kernel: PCI host bridge to bus 0000:00 Dec 12 17:40:19.347555 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:40:19.347670 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:40:19.347760 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:40:19.348039 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:40:19.348158 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:40:19.348254 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 17:40:19.348345 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 12 17:40:19.348430 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 12 17:40:19.348512 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:40:19.348603 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:40:19.348684 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 12 17:40:19.348765 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 12 17:40:19.348883 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:40:19.348966 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:40:19.349039 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:40:19.349049 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:40:19.349057 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:40:19.349065 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:40:19.349073 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:40:19.349081 kernel: iommu: Default domain type: Translated Dec 12 17:40:19.349091 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:40:19.349099 kernel: efivars: Registered efivars operations Dec 12 17:40:19.349106 kernel: vgaarb: loaded Dec 12 17:40:19.349114 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:40:19.349122 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:40:19.349130 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:40:19.349138 kernel: pnp: PnP ACPI init Dec 12 17:40:19.349233 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:40:19.349244 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:40:19.349251 kernel: NET: Registered PF_INET protocol family Dec 12 17:40:19.349259 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:40:19.349267 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:40:19.349275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:40:19.349282 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:40:19.349292 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:40:19.349300 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:40:19.349308 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:40:19.349316 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:40:19.349324 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:40:19.349331 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:40:19.349339 
kernel: kvm [1]: HYP mode not available Dec 12 17:40:19.349348 kernel: Initialise system trusted keyrings Dec 12 17:40:19.349356 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:40:19.349363 kernel: Key type asymmetric registered Dec 12 17:40:19.349371 kernel: Asymmetric key parser 'x509' registered Dec 12 17:40:19.349378 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:40:19.349386 kernel: io scheduler mq-deadline registered Dec 12 17:40:19.349393 kernel: io scheduler kyber registered Dec 12 17:40:19.349403 kernel: io scheduler bfq registered Dec 12 17:40:19.349411 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:40:19.349418 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:40:19.349427 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:40:19.349510 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 12 17:40:19.349521 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:40:19.349529 kernel: thunder_xcv, ver 1.0 Dec 12 17:40:19.349539 kernel: thunder_bgx, ver 1.0 Dec 12 17:40:19.349546 kernel: nicpf, ver 1.0 Dec 12 17:40:19.349554 kernel: nicvf, ver 1.0 Dec 12 17:40:19.349650 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:40:19.349728 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:40:18 UTC (1765561218) Dec 12 17:40:19.349739 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:40:19.349749 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:40:19.349757 kernel: watchdog: NMI not fully supported Dec 12 17:40:19.349765 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:40:19.349772 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:40:19.349780 kernel: Segment Routing with IPv6 Dec 12 17:40:19.349787 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:40:19.349795 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:40:19.349814 kernel: Key type dns_resolver registered Dec 12 17:40:19.349825 kernel: registered taskstats version 1 Dec 12 17:40:19.349833 kernel: Loading compiled-in X.509 certificates Dec 12 17:40:19.349848 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:40:19.349856 kernel: Demotion targets for Node 0: null Dec 12 17:40:19.349864 kernel: Key type .fscrypt registered Dec 12 17:40:19.349887 kernel: Key type fscrypt-provisioning registered Dec 12 17:40:19.349896 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 12 17:40:19.349907 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:40:19.349915 kernel: ima: No architecture policies found Dec 12 17:40:19.349923 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:40:19.349930 kernel: clk: Disabling unused clocks Dec 12 17:40:19.349938 kernel: PM: genpd: Disabling unused power domains Dec 12 17:40:19.349945 kernel: Freeing unused kernel memory: 12416K Dec 12 17:40:19.349953 kernel: Run /init as init process Dec 12 17:40:19.349962 kernel: with arguments: Dec 12 17:40:19.349970 kernel: /init Dec 12 17:40:19.349978 kernel: with environment: Dec 12 17:40:19.349986 kernel: HOME=/ Dec 12 17:40:19.349993 kernel: TERM=linux Dec 12 17:40:19.350104 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:40:19.350187 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 12 17:40:19.350201 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:40:19.350209 kernel: GPT:16515071 != 27000831 Dec 12 17:40:19.350217 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:40:19.350225 kernel: GPT:16515071 != 27000831 Dec 12 17:40:19.350232 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:40:19.350239 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:40:19.350249 kernel: SCSI subsystem initialized Dec 12 17:40:19.350257 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:40:19.350265 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:40:19.350273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:40:19.350281 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:40:19.350288 kernel: raid6: neonx8 gen() 15710 MB/s Dec 12 17:40:19.350296 kernel: raid6: neonx4 gen() 15306 MB/s Dec 12 17:40:19.350305 kernel: raid6: neonx2 gen() 13105 MB/s Dec 12 17:40:19.350312 kernel: raid6: neonx1 gen() 10516 MB/s Dec 12 17:40:19.350320 kernel: raid6: int64x8 gen() 6748 MB/s Dec 12 17:40:19.350327 kernel: raid6: int64x4 gen() 7264 MB/s Dec 12 17:40:19.350335 kernel: raid6: int64x2 gen() 6099 MB/s Dec 12 17:40:19.350342 kernel: raid6: int64x1 gen() 5046 MB/s Dec 12 17:40:19.350349 kernel: raid6: using algorithm neonx8 gen() 15710 MB/s Dec 12 17:40:19.350359 kernel: raid6: .... 
xor() 11920 MB/s, rmw enabled Dec 12 17:40:19.350366 kernel: raid6: using neon recovery algorithm Dec 12 17:40:19.350374 kernel: xor: measuring software checksum speed Dec 12 17:40:19.350381 kernel: 8regs : 21596 MB/sec Dec 12 17:40:19.350389 kernel: 32regs : 21687 MB/sec Dec 12 17:40:19.350397 kernel: arm64_neon : 28089 MB/sec Dec 12 17:40:19.350404 kernel: xor: using function: arm64_neon (28089 MB/sec) Dec 12 17:40:19.350412 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:40:19.350421 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (207) Dec 12 17:40:19.350429 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:40:19.350437 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:40:19.350445 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:40:19.350452 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:40:19.350460 kernel: loop: module loaded Dec 12 17:40:19.350467 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:40:19.350476 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:40:19.350486 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:40:19.350497 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:40:19.350505 systemd[1]: Detected virtualization kvm. Dec 12 17:40:19.350513 systemd[1]: Detected architecture arm64. Dec 12 17:40:19.350522 systemd[1]: Running in initrd. Dec 12 17:40:19.350530 systemd[1]: No hostname configured, using default hostname. Dec 12 17:40:19.350539 systemd[1]: Hostname set to . Dec 12 17:40:19.350547 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:40:19.350554 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:40:19.350563 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:40:19.350571 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:40:19.350581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:40:19.350590 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:40:19.350598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:40:19.350607 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:40:19.350615 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:40:19.350625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:40:19.350634 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:40:19.350642 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:40:19.350650 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:40:19.350658 systemd[1]: Reached target slices.target - Slice Units. 
Dec 12 17:40:19.350666 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:40:19.350675 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:40:19.350684 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:40:19.350693 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:40:19.350701 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:40:19.350710 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:40:19.350725 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:40:19.350737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:40:19.350745 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:40:19.350754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:40:19.350764 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:40:19.350772 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:40:19.350781 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:40:19.350789 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:40:19.350820 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:40:19.350830 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:40:19.350848 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:40:19.350857 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:40:19.350865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:40:19.350877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:40:19.350886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:40:19.350895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:40:19.350903 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:40:19.350912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:40:19.350945 systemd-journald[351]: Collecting audit messages is enabled. Dec 12 17:40:19.350966 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:40:19.350974 kernel: Bridge firewalling registered Dec 12 17:40:19.350985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:40:19.350994 systemd-journald[351]: Journal started Dec 12 17:40:19.351014 systemd-journald[351]: Runtime Journal (/run/log/journal/a1332660f8d34c14941edae33a2fa734) is 6M, max 48.5M, 42.4M free. Dec 12 17:40:19.348502 systemd-modules-load[352]: Inserted module 'br_netfilter' Dec 12 17:40:19.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:19.356831 kernel: audit: type=1130 audit(1765561219.353:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.356872 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:40:19.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.361470 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:40:19.367032 kernel: audit: type=1130 audit(1765561219.357:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.367071 kernel: audit: type=1130 audit(1765561219.362:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.367024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:19.371871 kernel: audit: type=1130 audit(1765561219.368:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.371626 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:40:19.373872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:40:19.376010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:40:19.386946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:40:19.396665 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:40:19.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.402895 kernel: audit: type=1130 audit(1765561219.397:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.405010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:40:19.410908 kernel: audit: type=1130 audit(1765561219.406:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:19.410938 kernel: audit: type=1334 audit(1765561219.408:8): prog-id=6 op=LOAD Dec 12 17:40:19.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.408000 audit: BPF prog-id=6 op=LOAD Dec 12 17:40:19.406131 systemd-tmpfiles[376]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:40:19.409745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:40:19.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.413444 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:40:19.423446 kernel: audit: type=1130 audit(1765561219.414:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.423480 kernel: audit: type=1130 audit(1765561219.419:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.415127 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:40:19.426637 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:40:19.458851 dracut-cmdline[395]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:40:19.461509 systemd-resolved[390]: Positive Trust Anchors: Dec 12 17:40:19.461521 systemd-resolved[390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:40:19.461525 systemd-resolved[390]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:40:19.461557 systemd-resolved[390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:40:19.486953 systemd-resolved[390]: Defaulting to hostname 'linux'. Dec 12 17:40:19.489194 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 12 17:40:19.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:19.490929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:40:19.549849 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:40:19.558847 kernel: iscsi: registered transport (tcp)
Dec 12 17:40:19.572856 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:40:19.572919 kernel: QLogic iSCSI HBA Driver
Dec 12 17:40:19.597822 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:40:19.617342 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:40:19.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:19.619726 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:40:19.674953 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:40:19.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:19.677264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:40:19.681181 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:40:19.725707 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:40:19.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:19.727000 audit: BPF prog-id=7 op=LOAD
Dec 12 17:40:19.727000 audit: BPF prog-id=8 op=LOAD
Dec 12 17:40:19.728881 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:40:19.765031 systemd-udevd[630]: Using default interface naming scheme 'v257'.
Dec 12 17:40:19.774470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:40:19.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:19.779832 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:40:19.809703 dracut-pre-trigger[705]: rd.md=0: removing MD RAID activation
Dec 12 17:40:19.817420 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:40:19.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:19.819000 audit: BPF prog-id=9 op=LOAD
Dec 12 17:40:19.820625 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:40:19.841030 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:40:19.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.843686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:40:19.878682 systemd-networkd[754]: lo: Link UP Dec 12 17:40:19.878691 systemd-networkd[754]: lo: Gained carrier Dec 12 17:40:19.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.879347 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:40:19.880706 systemd[1]: Reached target network.target - Network. Dec 12 17:40:19.907855 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:40:19.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:19.911434 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:40:19.962818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:40:19.976057 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:40:19.984237 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:40:19.996196 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:40:19.998581 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:40:20.016684 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:40:20.016824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:20.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:20.021209 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:40:20.022655 systemd-networkd[754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:40:20.022659 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:40:20.023853 systemd-networkd[754]: eth0: Link UP Dec 12 17:40:20.024031 systemd-networkd[754]: eth0: Gained carrier Dec 12 17:40:20.024044 systemd-networkd[754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:40:20.024884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:40:20.037890 systemd-networkd[754]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:40:20.105130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:40:20.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:20.106866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:40:20.108211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:40:20.110465 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:40:20.113542 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:40:20.115501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:20.117675 disk-uuid[809]: Primary Header is updated. Dec 12 17:40:20.117675 disk-uuid[809]: Secondary Entries is updated. Dec 12 17:40:20.117675 disk-uuid[809]: Secondary Header is updated. Dec 12 17:40:20.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:20.141396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:40:20.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.142429 disk-uuid[823]: Warning: The kernel is still using the old partition table. Dec 12 17:40:21.142429 disk-uuid[823]: The new table will be used at the next reboot or after you Dec 12 17:40:21.142429 disk-uuid[823]: run partprobe(8) or kpartx(8) Dec 12 17:40:21.142429 disk-uuid[823]: The operation has completed successfully. Dec 12 17:40:21.151988 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:40:21.152899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:40:21.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.155173 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:40:21.182822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Dec 12 17:40:21.183367 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:40:21.185019 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:40:21.189933 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:40:21.190079 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:40:21.200063 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:40:21.200183 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:40:21.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.202764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 12 17:40:21.318122 ignition[859]: Ignition 2.22.0
Dec 12 17:40:21.318138 ignition[859]: Stage: fetch-offline
Dec 12 17:40:21.318182 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:21.318191 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:40:21.318354 ignition[859]: parsed url from cmdline: ""
Dec 12 17:40:21.318357 ignition[859]: no config URL provided
Dec 12 17:40:21.318361 ignition[859]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:40:21.318370 ignition[859]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:40:21.318408 ignition[859]: op(1): [started] loading QEMU firmware config module
Dec 12 17:40:21.318413 ignition[859]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 17:40:21.331309 ignition[859]: op(1): [finished] loading QEMU firmware config module
Dec 12 17:40:21.331345 ignition[859]: QEMU firmware config was not found. Ignoring...
Dec 12 17:40:21.377839 ignition[859]: parsing config with SHA512: 04c3478a25da0836f9b0179d9ae863bc7ca2eec53f050ee224203e5f001b80e392bb90fad49049e74b8a5060b581c6022549914d96fcce8ba385b7e034faeb78
Dec 12 17:40:21.383665 unknown[859]: fetched base config from "system"
Dec 12 17:40:21.383678 unknown[859]: fetched user config from "qemu"
Dec 12 17:40:21.384111 ignition[859]: fetch-offline: fetch-offline passed
Dec 12 17:40:21.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:21.386452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:40:21.384175 ignition[859]: Ignition finished successfully
Dec 12 17:40:21.389602 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 17:40:21.390519 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:40:21.427924 systemd-networkd[754]: eth0: Gained IPv6LL
Dec 12 17:40:21.428241 ignition[871]: Ignition 2.22.0
Dec 12 17:40:21.428247 ignition[871]: Stage: kargs
Dec 12 17:40:21.428398 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:21.428406 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:40:21.429635 ignition[871]: kargs: kargs passed
Dec 12 17:40:21.429675 ignition[871]: Ignition finished successfully
Dec 12 17:40:21.435379 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:40:21.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:21.437509 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:40:21.478237 ignition[879]: Ignition 2.22.0
Dec 12 17:40:21.478260 ignition[879]: Stage: disks
Dec 12 17:40:21.478408 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:21.478417 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:40:21.479422 ignition[879]: disks: disks passed
Dec 12 17:40:21.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:40:21.481402 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:40:21.479470 ignition[879]: Ignition finished successfully Dec 12 17:40:21.482643 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:40:21.483785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:40:21.485664 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:40:21.487114 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:40:21.488741 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:40:21.491563 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:40:21.532724 systemd-fsck[889]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 12 17:40:21.639669 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:40:21.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.642372 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:40:21.763833 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:40:21.764023 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:40:21.765386 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:40:21.768754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:40:21.771227 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:40:21.772260 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:40:21.772297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:40:21.772322 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:40:21.785456 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:40:21.787686 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:40:21.794846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897) Dec 12 17:40:21.798322 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:40:21.798370 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:40:21.801104 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:40:21.801149 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:40:21.802716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:40:21.834005 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:40:21.838376 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:40:21.843383 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:40:21.847946 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:40:21.930771 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:40:21.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:21.933430 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:40:21.935344 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:40:21.955778 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:40:21.958096 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:40:21.971711 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:40:21.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.986570 ignition[1011]: INFO : Ignition 2.22.0 Dec 12 17:40:21.986570 ignition[1011]: INFO : Stage: mount Dec 12 17:40:21.988146 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:40:21.988146 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:40:21.988146 ignition[1011]: INFO : mount: mount passed Dec 12 17:40:21.988146 ignition[1011]: INFO : Ignition finished successfully Dec 12 17:40:21.992106 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:40:21.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:21.996926 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:40:22.765685 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:40:22.795836 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Dec 12 17:40:22.797911 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:40:22.797945 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:40:22.800935 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:40:22.800963 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:40:22.802457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:40:22.836085 ignition[1040]: INFO : Ignition 2.22.0 Dec 12 17:40:22.836085 ignition[1040]: INFO : Stage: files Dec 12 17:40:22.837953 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:40:22.837953 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:40:22.837953 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:40:22.841501 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:40:22.841501 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:40:22.841501 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:40:22.846020 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:40:22.846020 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:40:22.846020 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:40:22.846020 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:40:22.842224 unknown[1040]: wrote ssh authorized keys file for user: core Dec 12 17:40:22.928549 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:40:23.385494 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:40:23.385494 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:40:23.390145 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:40:23.404189 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:40:23.404189 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:40:23.404189 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:40:23.404189 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:40:23.404189 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:40:23.404189 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 12 17:40:23.664965 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:40:23.927358 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:40:23.927358 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:40:23.931614 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 17:40:23.933618 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:40:23.953601 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:40:23.958864 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:40:23.961949 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:40:23.961949 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:40:23.961949 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:40:23.961949 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:40:23.961949 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:40:23.961949 ignition[1040]: INFO : files: files passed Dec 12 17:40:23.961949 ignition[1040]: INFO : Ignition finished successfully Dec 12 17:40:23.981757 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 12 17:40:23.981784 kernel: audit: type=1130 audit(1765561223.974:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:23.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:23.963513 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:40:23.977980 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:40:23.992186 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:40:23.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:23.996385 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:40:24.005726 kernel: audit: type=1130 audit(1765561223.998:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.005757 kernel: audit: type=1131 audit(1765561223.998:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:23.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:23.996478 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:40:24.008838 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:40:24.012288 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:40:24.014671 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:40:24.014671 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:40:24.016233 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:40:24.024228 kernel: audit: type=1130 audit(1765561224.017:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.018706 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:40:24.024608 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:40:24.067405 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:40:24.068712 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:40:24.076455 kernel: audit: type=1130 audit(1765561224.069:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:24.076485 kernel: audit: type=1131 audit(1765561224.069:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.070406 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:40:24.077622 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:40:24.079850 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:40:24.080892 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:40:24.115613 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:40:24.120934 kernel: audit: type=1130 audit(1765561224.116:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.118135 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:40:24.150336 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:40:24.150528 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:40:24.152972 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:40:24.155329 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:40:24.157390 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:40:24.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.157568 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:40:24.164212 kernel: audit: type=1131 audit(1765561224.158:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.163138 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:40:24.165225 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:40:24.167038 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:40:24.169043 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:40:24.172436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:40:24.174515 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:40:24.178701 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
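The Ignition files stage logged a few entries back (fetching the helm tarball into /opt, writing the /home/core manifests, enabling prepare-helm.service and disabling coreos-metadata.service) is driven by a user-supplied Ignition config. As a hedged illustration only, the sketch below emits a rough Ignition-style (spec v3.x) fragment that would request operations of that shape; it is not the actual config used for this boot, and the unit contents are deliberately left as placeholders:

    import json

    # Hypothetical Ignition-style config fragment, NOT the config used on this
    # machine: one remote file fetched into /opt, one unit enabled, one disabled,
    # mirroring the shape of the files-stage operations in the log above.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"
                    },
                }
            ]
        },
        "systemd": {
            "units": [
                # Unit body omitted; a real config would carry the full
                # [Unit]/[Service]/[Install] text in "contents".
                {"name": "prepare-helm.service", "enabled": True, "contents": "..."},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))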
Dec 12 17:40:24.180707 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:40:24.182827 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:40:24.185989 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:40:24.188007 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:40:24.189025 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:40:24.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.189173 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:40:24.194526 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:40:24.197695 kernel: audit: type=1131 audit(1765561224.189:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.196722 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:40:24.198946 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:40:24.199914 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:40:24.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.201323 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:40:24.208169 kernel: audit: type=1131 audit(1765561224.202:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.201460 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:40:24.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.207228 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:40:24.207371 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:40:24.209506 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:40:24.211396 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:40:24.214868 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:40:24.217055 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:40:24.219119 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:40:24.220698 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:40:24.220795 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:40:24.222504 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:40:24.222611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:40:24.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:24.224148 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:40:24.224225 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:40:24.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.226026 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:40:24.226151 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:40:24.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.228016 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:40:24.228127 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:40:24.230911 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:40:24.232885 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:40:24.233022 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:40:24.255506 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:40:24.256530 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:40:24.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.256677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:40:24.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.258959 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:40:24.259076 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:40:24.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.261203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:40:24.261314 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:40:24.268852 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:40:24.268961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:40:24.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:24.272619 ignition[1098]: INFO : Ignition 2.22.0 Dec 12 17:40:24.272619 ignition[1098]: INFO : Stage: umount Dec 12 17:40:24.272619 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:40:24.272619 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:40:24.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.279254 ignition[1098]: INFO : umount: umount passed Dec 12 17:40:24.279254 ignition[1098]: INFO : Ignition finished successfully Dec 12 17:40:24.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.274319 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:40:24.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.274440 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:40:24.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.276159 systemd[1]: Stopped target network.target - Network. Dec 12 17:40:24.277543 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:40:24.277615 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:40:24.280423 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:40:24.280485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:40:24.281951 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:40:24.282009 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:40:24.283619 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:40:24.283665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:40:24.285604 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:40:24.287291 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:40:24.289969 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:40:24.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.299842 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:40:24.300001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:40:24.304924 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:40:24.306000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:40:24.305048 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
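Before the root switch, the initrd stops its Ignition units along with systemd-networkd and systemd-resolved, as recorded above. Purely as an illustration, a small hypothetical helper that asks systemctl whether such units are still active (systemctl is-active exits 0 only for an active unit):

    import subprocess

    # Units the initrd teardown above stops before switching root.
    UNITS = [
        "ignition-mount.service",
        "systemd-networkd.service",
        "systemd-resolved.service",
    ]

    def is_active(unit: str) -> bool:
        # `systemctl is-active --quiet` returns exit code 0 only when active.
        return subprocess.run(
            ["systemctl", "is-active", "--quiet", unit],
            check=False,
        ).returncode == 0

    for unit in UNITS:
        state = "still active" if is_active(unit) else "stopped"
        print(f"{unit}: {state}")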
Dec 12 17:40:24.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.309543 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:40:24.309676 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:40:24.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.311000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:40:24.312548 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:40:24.313766 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:40:24.313830 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:40:24.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.315909 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:40:24.315968 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:40:24.318683 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:40:24.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.319665 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:40:24.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.319727 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:40:24.322057 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:40:24.322111 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:40:24.324070 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:40:24.324117 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:40:24.325987 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:40:24.343023 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:40:24.343195 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:40:24.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.345650 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:40:24.345714 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:40:24.347495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 12 17:40:24.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.347526 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:40:24.349249 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:40:24.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.349299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:40:24.352132 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:40:24.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.352184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:40:24.355125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:40:24.355184 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:40:24.358933 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:40:24.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.360207 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:40:24.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.360269 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:40:24.362327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:40:24.362372 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:40:24.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.364673 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:40:24.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:24.364719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:24.367758 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:40:24.369830 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
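The teardown above interleaves kernel audit records of the form audit[1]: SERVICE_START / SERVICE_STOP ... msg='unit=<name> ... res=success'. The following sketch is a hypothetical helper, not part of the boot flow, that tallies those records from a saved copy of this console log (pass the file name as the first argument):

    import re
    import sys

    # Matches "SERVICE_START ... unit=<name> ... res=<result>" audit records,
    # several of which may share one physical line in this capture.
    RECORD = re.compile(r"SERVICE_(START|STOP)\b.*?unit=([\w.@\\:-]+).*?res=(\w+)")

    def tally(path):
        counts = {}
        with open(path, errors="replace") as f:
            for line in f:
                for kind, unit, result in RECORD.findall(line):
                    key = (kind, result)
                    counts[key] = counts.get(key, 0) + 1
                    print(f"{kind:<5} {unit:<40} {result}")
        print("summary:", counts)

    if __name__ == "__main__":
        tally(sys.argv[1])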
Dec 12 17:40:24.372062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:40:24.372164 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:40:24.374742 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:40:24.376718 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:40:24.399456 systemd[1]: Switching root. Dec 12 17:40:24.439066 systemd-journald[351]: Journal stopped Dec 12 17:40:25.257500 systemd-journald[351]: Received SIGTERM from PID 1 (systemd). Dec 12 17:40:25.257553 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:40:25.257570 kernel: SELinux: policy capability open_perms=1 Dec 12 17:40:25.257582 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:40:25.257593 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:40:25.257603 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:40:25.257613 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:40:25.257626 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:40:25.257639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:40:25.257650 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:40:25.257664 systemd[1]: Successfully loaded SELinux policy in 59.165ms. Dec 12 17:40:25.257679 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.813ms. Dec 12 17:40:25.257691 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:40:25.257703 systemd[1]: Detected virtualization kvm. Dec 12 17:40:25.257715 systemd[1]: Detected architecture arm64. Dec 12 17:40:25.257730 systemd[1]: Detected first boot. Dec 12 17:40:25.257742 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:40:25.257753 zram_generator::config[1143]: No configuration found. Dec 12 17:40:25.257765 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:40:25.257781 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:40:25.257796 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:40:25.257833 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:40:25.257846 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:40:25.257862 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:40:25.257874 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:40:25.257885 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:40:25.257899 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:40:25.257910 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:40:25.257922 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:40:25.257933 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:40:25.257944 systemd[1]: Created slice user.slice - User and Session Slice. 
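After the switch to the real root, systemd prints its compile-time feature string ("+PAM +AUDIT +SELINUX -APPARMOR ..."): a leading '+' marks a feature built in, '-' one compiled out. A tiny illustrative parser for that convention (the excerpt in the code is copied from the log above; the helper itself is an aside, not anything systemd provides):

    # Excerpt of the feature string printed by systemd 257.9 in the log above.
    FLAGS = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL"

    def parse_features(flags: str):
        enabled, disabled = [], []
        for token in flags.split():
            (enabled if token.startswith("+") else disabled).append(token[1:])
        return enabled, disabled

    enabled, disabled = parse_features(FLAGS)
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))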
Dec 12 17:40:25.257955 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:40:25.257966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:40:25.257977 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:40:25.257989 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:40:25.258000 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:40:25.258011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:40:25.258022 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:40:25.258033 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:40:25.258047 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:40:25.258062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:40:25.258074 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:40:25.258087 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:40:25.258098 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:40:25.258109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:40:25.258120 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:40:25.258131 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:40:25.258142 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:40:25.258153 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:40:25.258165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:40:25.258175 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:40:25.258186 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:40:25.258197 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:40:25.258208 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:40:25.258220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:40:25.258231 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:40:25.258242 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:40:25.258253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:40:25.258264 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:40:25.258274 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:40:25.258285 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:40:25.258297 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:40:25.258308 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:40:25.258319 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:40:25.258329 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 12 17:40:25.258340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:40:25.258352 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:40:25.258363 systemd[1]: Reached target machines.target - Containers. Dec 12 17:40:25.258375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:40:25.258385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:40:25.258396 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:40:25.258408 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:40:25.258419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:40:25.258429 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:40:25.258441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:40:25.258453 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:40:25.258465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:40:25.258477 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:40:25.258488 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:40:25.258499 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:40:25.258510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:40:25.258523 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:40:25.258534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:40:25.258546 kernel: fuse: init (API version 7.41) Dec 12 17:40:25.258557 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:40:25.258571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:40:25.258582 kernel: ACPI: bus type drm_connector registered Dec 12 17:40:25.258592 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:40:25.258603 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:40:25.258615 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:40:25.258647 systemd-journald[1217]: Collecting audit messages is enabled. Dec 12 17:40:25.258673 systemd-journald[1217]: Journal started Dec 12 17:40:25.258695 systemd-journald[1217]: Runtime Journal (/run/log/journal/a1332660f8d34c14941edae33a2fa734) is 6M, max 48.5M, 42.4M free. Dec 12 17:40:25.118000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:40:25.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:25.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.217000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:40:25.217000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:40:25.218000 audit: BPF prog-id=15 op=LOAD Dec 12 17:40:25.219000 audit: BPF prog-id=16 op=LOAD Dec 12 17:40:25.219000 audit: BPF prog-id=17 op=LOAD Dec 12 17:40:25.255000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:40:25.255000 audit[1217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffffd7fc420 a2=4000 a3=0 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:25.255000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:40:25.020263 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:40:25.040035 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:40:25.040509 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:40:25.269200 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:40:25.273368 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:40:25.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.274560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:40:25.275831 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:40:25.276937 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:40:25.278012 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:40:25.279177 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:40:25.280296 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:40:25.281508 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:40:25.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.283051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:40:25.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.284576 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:40:25.284757 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:40:25.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:25.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.287290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:40:25.287477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:40:25.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.288977 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:40:25.289142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:40:25.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.290489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:40:25.290676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:40:25.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.292312 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:40:25.292475 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:40:25.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.294012 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:40:25.294183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:40:25.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:25.295612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:40:25.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.297216 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:40:25.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.299490 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:40:25.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.301392 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:40:25.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.314642 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:40:25.316437 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:40:25.319043 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:40:25.321143 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:40:25.322291 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:40:25.322336 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:40:25.324306 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:40:25.325748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:40:25.325883 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:40:25.333712 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:40:25.335910 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:40:25.337025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:40:25.338000 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:40:25.339183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:40:25.341967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:40:25.344041 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Dec 12 17:40:25.344570 systemd-journald[1217]: Time spent on flushing to /var/log/journal/a1332660f8d34c14941edae33a2fa734 is 17.690ms for 1006 entries. Dec 12 17:40:25.344570 systemd-journald[1217]: System Journal (/var/log/journal/a1332660f8d34c14941edae33a2fa734) is 8M, max 163.5M, 155.5M free. Dec 12 17:40:25.369732 systemd-journald[1217]: Received client request to flush runtime journal. Dec 12 17:40:25.369767 kernel: loop1: detected capacity change from 0 to 200800 Dec 12 17:40:25.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.347181 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:40:25.351269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:40:25.353565 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:40:25.354887 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:40:25.360957 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:40:25.365027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:40:25.368393 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:40:25.373049 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:40:25.374722 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:40:25.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.389898 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:40:25.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.391000 audit: BPF prog-id=18 op=LOAD Dec 12 17:40:25.393768 kernel: loop2: detected capacity change from 0 to 109872 Dec 12 17:40:25.391000 audit: BPF prog-id=19 op=LOAD Dec 12 17:40:25.391000 audit: BPF prog-id=20 op=LOAD Dec 12 17:40:25.393370 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:40:25.394000 audit: BPF prog-id=21 op=LOAD Dec 12 17:40:25.397014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:40:25.401693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
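systemd-journald reports above that flushing the runtime journal to /var/log/journal took 17.690 ms for 1006 entries. A quick back-of-the-envelope check of the per-entry cost, using only the numbers from the log:

    # Figures copied from the journald flush message above.
    flush_ms = 17.690
    entries = 1006

    per_entry_us = flush_ms * 1000 / entries
    print(f"~{per_entry_us:.1f} microseconds per journal entry")  # roughly 17.6 µs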
Dec 12 17:40:25.404000 audit: BPF prog-id=22 op=LOAD Dec 12 17:40:25.404000 audit: BPF prog-id=23 op=LOAD Dec 12 17:40:25.404000 audit: BPF prog-id=24 op=LOAD Dec 12 17:40:25.406206 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:40:25.407000 audit: BPF prog-id=25 op=LOAD Dec 12 17:40:25.407000 audit: BPF prog-id=26 op=LOAD Dec 12 17:40:25.407000 audit: BPF prog-id=27 op=LOAD Dec 12 17:40:25.410012 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:40:25.412738 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:40:25.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.425284 kernel: loop3: detected capacity change from 0 to 100192 Dec 12 17:40:25.440855 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Dec 12 17:40:25.440873 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Dec 12 17:40:25.446968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:40:25.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.451837 kernel: loop4: detected capacity change from 0 to 200800 Dec 12 17:40:25.453305 systemd-nsresourced[1280]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:40:25.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.454638 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:40:25.464901 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:40:25.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.466834 kernel: loop5: detected capacity change from 0 to 109872 Dec 12 17:40:25.472895 kernel: loop6: detected capacity change from 0 to 100192 Dec 12 17:40:25.477234 (sd-merge)[1287]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 12 17:40:25.481054 (sd-merge)[1287]: Merged extensions into '/usr'. Dec 12 17:40:25.487243 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:40:25.487260 systemd[1]: Reloading... Dec 12 17:40:25.514927 systemd-oomd[1277]: No swap; memory pressure usage will be degraded Dec 12 17:40:25.527484 systemd-resolved[1278]: Positive Trust Anchors: Dec 12 17:40:25.527795 systemd-resolved[1278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:40:25.527896 systemd-resolved[1278]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:40:25.527968 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:40:25.537340 systemd-resolved[1278]: Defaulting to hostname 'linux'. Dec 12 17:40:25.548832 zram_generator::config[1331]: No configuration found. Dec 12 17:40:25.689473 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:40:25.689654 systemd[1]: Reloading finished in 202 ms. Dec 12 17:40:25.736596 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:40:25.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.738013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:40:25.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.739317 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:40:25.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:25.744604 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:40:25.758048 systemd[1]: Starting ensure-sysext.service... Dec 12 17:40:25.759917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 12 17:40:25.760000 audit: BPF prog-id=28 op=LOAD Dec 12 17:40:25.760000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:40:25.760000 audit: BPF prog-id=29 op=LOAD Dec 12 17:40:25.760000 audit: BPF prog-id=30 op=LOAD Dec 12 17:40:25.760000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:40:25.760000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:40:25.761000 audit: BPF prog-id=31 op=LOAD Dec 12 17:40:25.761000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:40:25.761000 audit: BPF prog-id=32 op=LOAD Dec 12 17:40:25.761000 audit: BPF prog-id=33 op=LOAD Dec 12 17:40:25.761000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:40:25.761000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:40:25.762000 audit: BPF prog-id=34 op=LOAD Dec 12 17:40:25.762000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:40:25.762000 audit: BPF prog-id=35 op=LOAD Dec 12 17:40:25.763000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:40:25.763000 audit: BPF prog-id=36 op=LOAD Dec 12 17:40:25.763000 audit: BPF prog-id=37 op=LOAD Dec 12 17:40:25.763000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:40:25.763000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:40:25.763000 audit: BPF prog-id=38 op=LOAD Dec 12 17:40:25.763000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:40:25.763000 audit: BPF prog-id=39 op=LOAD Dec 12 17:40:25.763000 audit: BPF prog-id=40 op=LOAD Dec 12 17:40:25.763000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:40:25.763000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:40:25.774212 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:40:25.774252 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:40:25.774485 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:40:25.774872 systemd[1]: Reload requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:40:25.774886 systemd[1]: Reloading... Dec 12 17:40:25.775488 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Dec 12 17:40:25.775540 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Dec 12 17:40:25.779503 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:40:25.779512 systemd-tmpfiles[1365]: Skipping /boot Dec 12 17:40:25.786203 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:40:25.786215 systemd-tmpfiles[1365]: Skipping /boot Dec 12 17:40:25.824943 zram_generator::config[1397]: No configuration found. Dec 12 17:40:25.958663 systemd[1]: Reloading finished in 183 ms. Dec 12 17:40:25.969547 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:40:25.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:25.971000 audit: BPF prog-id=41 op=LOAD Dec 12 17:40:25.971000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:40:25.972000 audit: BPF prog-id=42 op=LOAD Dec 12 17:40:25.972000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:40:25.972000 audit: BPF prog-id=43 op=LOAD Dec 12 17:40:25.972000 audit: BPF prog-id=44 op=LOAD Dec 12 17:40:25.972000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:40:25.972000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:40:25.973000 audit: BPF prog-id=45 op=LOAD Dec 12 17:40:25.973000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:40:25.973000 audit: BPF prog-id=46 op=LOAD Dec 12 17:40:25.973000 audit: BPF prog-id=47 op=LOAD Dec 12 17:40:25.973000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:40:25.973000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:40:25.974000 audit: BPF prog-id=48 op=LOAD Dec 12 17:40:25.974000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:40:25.974000 audit: BPF prog-id=49 op=LOAD Dec 12 17:40:25.974000 audit: BPF prog-id=50 op=LOAD Dec 12 17:40:25.974000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:40:25.974000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:40:25.974000 audit: BPF prog-id=51 op=LOAD Dec 12 17:40:25.974000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:40:25.974000 audit: BPF prog-id=52 op=LOAD Dec 12 17:40:25.975000 audit: BPF prog-id=53 op=LOAD Dec 12 17:40:25.985000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:40:25.985000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:40:25.989772 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:40:25.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.008862 systemd[1]: Finished ensure-sysext.service. Dec 12 17:40:26.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.011111 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:40:26.013284 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:40:26.014539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:40:26.034085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:40:26.036420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:40:26.038676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:40:26.040842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:40:26.042541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:40:26.042650 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:40:26.043661 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:40:26.044903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 12 17:40:26.046759 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:40:26.048000 audit: BPF prog-id=54 op=LOAD Dec 12 17:40:26.049582 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:40:26.050000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:40:26.050000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:40:26.050000 audit: BPF prog-id=55 op=LOAD Dec 12 17:40:26.050000 audit: BPF prog-id=56 op=LOAD Dec 12 17:40:26.052241 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:40:26.058128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:40:26.061185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:40:26.061408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:40:26.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.063281 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:40:26.063882 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:40:26.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.066337 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:40:26.066524 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:40:26.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.071194 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:40:26.071387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:40:26.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.077066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 12 17:40:26.077147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:40:26.084879 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:40:26.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.087325 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:40:26.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.090849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:40:26.093000 audit[1451]: SYSTEM_BOOT pid=1451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.099997 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:40:26.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:26.110000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:40:26.110000 audit[1476]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcd3d2530 a2=420 a3=0 items=0 ppid=1434 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:26.110000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:40:26.112992 augenrules[1476]: No rules Dec 12 17:40:26.113089 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:40:26.113770 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:40:26.118210 systemd-udevd[1450]: Using default interface naming scheme 'v257'. Dec 12 17:40:26.133601 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:40:26.135378 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:40:26.139768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:40:26.144032 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:40:26.199838 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:40:26.223522 systemd-networkd[1490]: lo: Link UP Dec 12 17:40:26.224058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 12 17:40:26.224342 systemd-networkd[1490]: lo: Gained carrier Dec 12 17:40:26.226215 systemd-networkd[1490]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:40:26.226490 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:40:26.227092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:40:26.227483 systemd-networkd[1490]: eth0: Link UP Dec 12 17:40:26.227886 systemd-networkd[1490]: eth0: Gained carrier Dec 12 17:40:26.227905 systemd-networkd[1490]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:40:26.228570 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:40:26.230502 systemd[1]: Reached target network.target - Network. Dec 12 17:40:26.233608 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:40:26.241493 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:40:26.244141 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:40:26.244878 systemd-timesyncd[1449]: Network configuration changed, trying to establish connection. Dec 12 17:40:26.245449 systemd-timesyncd[1449]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:40:26.245567 systemd-timesyncd[1449]: Initial clock synchronization to Fri 2025-12-12 17:40:26.609890 UTC. Dec 12 17:40:26.256738 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:40:26.259207 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:40:26.326675 ldconfig[1447]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:40:26.331550 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:40:26.335357 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:40:26.344498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:40:26.353863 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:40:26.410965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:26.413487 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:40:26.414784 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:40:26.415969 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:40:26.417295 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:40:26.418460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:40:26.419691 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:40:26.420988 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:40:26.422013 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Dec 12 17:40:26.423174 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:40:26.423209 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:40:26.424041 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:40:26.425744 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:40:26.428232 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:40:26.430984 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:40:26.432345 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:40:26.433553 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:40:26.438683 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:40:26.440131 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:40:26.441912 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:40:26.443021 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:40:26.443925 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:40:26.444825 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:40:26.444860 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:40:26.445892 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:40:26.447910 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:40:26.449884 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:40:26.451937 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:40:26.453909 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:40:26.454957 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:40:26.455959 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:40:26.457881 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:40:26.459167 jq[1547]: false Dec 12 17:40:26.462109 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:40:26.464381 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:40:26.467924 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:40:26.468028 extend-filesystems[1548]: Found /dev/vda6 Dec 12 17:40:26.469021 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:40:26.469472 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:40:26.470068 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:40:26.473099 extend-filesystems[1548]: Found /dev/vda9 Dec 12 17:40:26.474003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 12 17:40:26.477281 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:40:26.477849 extend-filesystems[1548]: Checking size of /dev/vda9 Dec 12 17:40:26.478762 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:40:26.479076 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:40:26.480144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:40:26.480359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:40:26.484528 jq[1560]: true Dec 12 17:40:26.493107 extend-filesystems[1548]: Resized partition /dev/vda9 Dec 12 17:40:26.501765 tar[1566]: linux-arm64/LICENSE Dec 12 17:40:26.502738 tar[1566]: linux-arm64/helm Dec 12 17:40:26.506465 extend-filesystems[1591]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:40:26.513080 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:40:26.514838 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:40:26.521928 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 12 17:40:26.521990 jq[1578]: true Dec 12 17:40:26.538147 dbus-daemon[1545]: [system] SELinux support is enabled Dec 12 17:40:26.539005 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:40:26.552676 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:40:26.552882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:40:26.554596 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:40:26.554641 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:40:26.556427 update_engine[1558]: I20251212 17:40:26.554765 1558 main.cc:92] Flatcar Update Engine starting Dec 12 17:40:26.563107 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:40:26.565459 update_engine[1558]: I20251212 17:40:26.563102 1558 update_check_scheduler.cc:74] Next update check in 2m43s Dec 12 17:40:26.567839 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 12 17:40:26.568846 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:40:26.590554 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:40:26.590786 systemd-logind[1556]: New seat seat0. Dec 12 17:40:26.591479 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:40:26.595056 extend-filesystems[1591]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:40:26.595056 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:40:26.595056 extend-filesystems[1591]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 12 17:40:26.607557 extend-filesystems[1548]: Resized filesystem in /dev/vda9 Dec 12 17:40:26.597020 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:40:26.597522 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 12 17:40:26.612881 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:40:26.616222 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:40:26.618091 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:40:26.623164 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:40:26.683225 containerd[1582]: time="2025-12-12T17:40:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:40:26.684053 containerd[1582]: time="2025-12-12T17:40:26.683884560Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:40:26.696233 containerd[1582]: time="2025-12-12T17:40:26.696175480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.72µs" Dec 12 17:40:26.696233 containerd[1582]: time="2025-12-12T17:40:26.696215680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:40:26.696340 containerd[1582]: time="2025-12-12T17:40:26.696279880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:40:26.696340 containerd[1582]: time="2025-12-12T17:40:26.696296280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:40:26.696491 containerd[1582]: time="2025-12-12T17:40:26.696444680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:40:26.696491 containerd[1582]: time="2025-12-12T17:40:26.696471080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:40:26.696555 containerd[1582]: time="2025-12-12T17:40:26.696524120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:40:26.696555 containerd[1582]: time="2025-12-12T17:40:26.696539080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697024 containerd[1582]: time="2025-12-12T17:40:26.696953960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697024 containerd[1582]: time="2025-12-12T17:40:26.696982280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697024 containerd[1582]: time="2025-12-12T17:40:26.696998800Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697024 containerd[1582]: time="2025-12-12T17:40:26.697010320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697267 containerd[1582]: time="2025-12-12T17:40:26.697158920Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" 
id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697267 containerd[1582]: time="2025-12-12T17:40:26.697177520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697267 containerd[1582]: time="2025-12-12T17:40:26.697246920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697484 containerd[1582]: time="2025-12-12T17:40:26.697452680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697518 containerd[1582]: time="2025-12-12T17:40:26.697492280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:40:26.697518 containerd[1582]: time="2025-12-12T17:40:26.697508760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:40:26.697554 containerd[1582]: time="2025-12-12T17:40:26.697538080Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:40:26.698051 containerd[1582]: time="2025-12-12T17:40:26.698009920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:40:26.699131 containerd[1582]: time="2025-12-12T17:40:26.699081880Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:40:26.706420 containerd[1582]: time="2025-12-12T17:40:26.706026240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:40:26.706420 containerd[1582]: time="2025-12-12T17:40:26.706338720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:40:26.706581 containerd[1582]: time="2025-12-12T17:40:26.706549480Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:40:26.706581 containerd[1582]: time="2025-12-12T17:40:26.706572400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:40:26.706640 containerd[1582]: time="2025-12-12T17:40:26.706592320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:40:26.706640 containerd[1582]: time="2025-12-12T17:40:26.706607360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:40:26.706640 containerd[1582]: time="2025-12-12T17:40:26.706620040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:40:26.706640 containerd[1582]: time="2025-12-12T17:40:26.706630200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:40:26.706707 containerd[1582]: time="2025-12-12T17:40:26.706642640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:40:26.706707 containerd[1582]: time="2025-12-12T17:40:26.706655480Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:40:26.706707 containerd[1582]: 
time="2025-12-12T17:40:26.706667680Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:40:26.706707 containerd[1582]: time="2025-12-12T17:40:26.706678760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:40:26.707009 containerd[1582]: time="2025-12-12T17:40:26.706826720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:40:26.707009 containerd[1582]: time="2025-12-12T17:40:26.706866480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:40:26.707009 containerd[1582]: time="2025-12-12T17:40:26.706996400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:40:26.707084 containerd[1582]: time="2025-12-12T17:40:26.707017680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:40:26.707084 containerd[1582]: time="2025-12-12T17:40:26.707037520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:40:26.707084 containerd[1582]: time="2025-12-12T17:40:26.707048200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:40:26.707084 containerd[1582]: time="2025-12-12T17:40:26.707059720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:40:26.707084 containerd[1582]: time="2025-12-12T17:40:26.707070000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:40:26.707084 containerd[1582]: time="2025-12-12T17:40:26.707081080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:40:26.707184 containerd[1582]: time="2025-12-12T17:40:26.707093200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:40:26.707184 containerd[1582]: time="2025-12-12T17:40:26.707104440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:40:26.707184 containerd[1582]: time="2025-12-12T17:40:26.707114960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:40:26.707184 containerd[1582]: time="2025-12-12T17:40:26.707124720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:40:26.707184 containerd[1582]: time="2025-12-12T17:40:26.707151800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:40:26.707265 containerd[1582]: time="2025-12-12T17:40:26.707188720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:40:26.707265 containerd[1582]: time="2025-12-12T17:40:26.707202440Z" level=info msg="Start snapshots syncer" Dec 12 17:40:26.707265 containerd[1582]: time="2025-12-12T17:40:26.707236120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:40:26.707750 containerd[1582]: time="2025-12-12T17:40:26.707614280Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:40:26.707750 containerd[1582]: time="2025-12-12T17:40:26.707674960Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:40:26.707911 containerd[1582]: time="2025-12-12T17:40:26.707767000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:40:26.707936 containerd[1582]: time="2025-12-12T17:40:26.707907360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:40:26.707936 containerd[1582]: time="2025-12-12T17:40:26.707932040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:40:26.707973 containerd[1582]: time="2025-12-12T17:40:26.707943240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:40:26.707973 containerd[1582]: time="2025-12-12T17:40:26.707953320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:40:26.707973 containerd[1582]: time="2025-12-12T17:40:26.707964640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:40:26.708022 containerd[1582]: time="2025-12-12T17:40:26.707976840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:40:26.708022 containerd[1582]: time="2025-12-12T17:40:26.707997880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:40:26.708022 containerd[1582]: time="2025-12-12T17:40:26.708009000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
17:40:26.708022 containerd[1582]: time="2025-12-12T17:40:26.708019680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:40:26.708088 containerd[1582]: time="2025-12-12T17:40:26.708057920Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:40:26.708088 containerd[1582]: time="2025-12-12T17:40:26.708071880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:40:26.708088 containerd[1582]: time="2025-12-12T17:40:26.708079880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:40:26.708137 containerd[1582]: time="2025-12-12T17:40:26.708088880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:40:26.708137 containerd[1582]: time="2025-12-12T17:40:26.708097400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:40:26.708137 containerd[1582]: time="2025-12-12T17:40:26.708108560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:40:26.708137 containerd[1582]: time="2025-12-12T17:40:26.708119600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:40:26.708137 containerd[1582]: time="2025-12-12T17:40:26.708131400Z" level=info msg="runtime interface created" Dec 12 17:40:26.708137 containerd[1582]: time="2025-12-12T17:40:26.708136280Z" level=info msg="created NRI interface" Dec 12 17:40:26.708238 containerd[1582]: time="2025-12-12T17:40:26.708143840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:40:26.708238 containerd[1582]: time="2025-12-12T17:40:26.708155920Z" level=info msg="Connect containerd service" Dec 12 17:40:26.708238 containerd[1582]: time="2025-12-12T17:40:26.708175240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:40:26.709550 containerd[1582]: time="2025-12-12T17:40:26.709437160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:40:26.779967 containerd[1582]: time="2025-12-12T17:40:26.779839880Z" level=info msg="Start subscribing containerd event" Dec 12 17:40:26.779967 containerd[1582]: time="2025-12-12T17:40:26.779918600Z" level=info msg="Start recovering state" Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780008280Z" level=info msg="Start event monitor" Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780028680Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780037840Z" level=info msg="Start streaming server" Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780048720Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780056120Z" level=info msg="runtime interface starting up..." 
Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780061680Z" level=info msg="starting plugins..." Dec 12 17:40:26.780092 containerd[1582]: time="2025-12-12T17:40:26.780074680Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:40:26.782058 containerd[1582]: time="2025-12-12T17:40:26.782028080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:40:26.782159 containerd[1582]: time="2025-12-12T17:40:26.782140600Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:40:26.782330 containerd[1582]: time="2025-12-12T17:40:26.782312480Z" level=info msg="containerd successfully booted in 0.099429s" Dec 12 17:40:26.782480 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:40:26.867418 tar[1566]: linux-arm64/README.md Dec 12 17:40:26.888175 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:40:27.092768 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:40:27.113642 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:40:27.117282 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:40:27.138598 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:40:27.138962 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:40:27.142995 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:40:27.170762 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:40:27.174986 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:40:27.177440 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:40:27.179053 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:40:27.891563 systemd-networkd[1490]: eth0: Gained IPv6LL Dec 12 17:40:27.895952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:40:27.898528 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:40:27.902362 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:40:27.904824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:27.907040 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:40:27.937444 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:40:27.941406 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:40:27.942931 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:40:27.945006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:40:28.473928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:28.475444 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:40:28.477998 systemd[1]: Startup finished in 1.528s (kernel) + 5.544s (initrd) + 3.914s (userspace) = 10.987s. 
Dec 12 17:40:28.478341 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:40:28.796508 kubelet[1683]: E1212 17:40:28.796378 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:40:28.798618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:40:28.798759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:40:28.799415 systemd[1]: kubelet.service: Consumed 693ms CPU time, 247.3M memory peak. Dec 12 17:40:31.024782 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:40:31.026033 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:53502.service - OpenSSH per-connection server daemon (10.0.0.1:53502). Dec 12 17:40:31.099646 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 53502 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:31.101514 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.108014 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:40:31.109091 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:40:31.114093 systemd-logind[1556]: New session 1 of user core. Dec 12 17:40:31.128761 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:40:31.133761 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:40:31.149448 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:40:31.152102 systemd-logind[1556]: New session c1 of user core. Dec 12 17:40:31.263001 systemd[1701]: Queued start job for default target default.target. Dec 12 17:40:31.285124 systemd[1701]: Created slice app.slice - User Application Slice. Dec 12 17:40:31.285166 systemd[1701]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:40:31.285179 systemd[1701]: Reached target paths.target - Paths. Dec 12 17:40:31.285238 systemd[1701]: Reached target timers.target - Timers. Dec 12 17:40:31.286675 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:40:31.287506 systemd[1701]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:40:31.298062 systemd[1701]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:40:31.300588 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:40:31.300709 systemd[1701]: Reached target sockets.target - Sockets. Dec 12 17:40:31.300758 systemd[1701]: Reached target basic.target - Basic System. Dec 12 17:40:31.300795 systemd[1701]: Reached target default.target - Main User Target. Dec 12 17:40:31.300822 systemd[1701]: Startup finished in 142ms. Dec 12 17:40:31.301061 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:40:31.302557 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:40:31.318283 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:53508.service - OpenSSH per-connection server daemon (10.0.0.1:53508). 
Dec 12 17:40:31.384956 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 53508 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:31.386436 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.390875 systemd-logind[1556]: New session 2 of user core. Dec 12 17:40:31.398034 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:40:31.409518 sshd[1717]: Connection closed by 10.0.0.1 port 53508 Dec 12 17:40:31.409865 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:31.422095 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:53508.service: Deactivated successfully. Dec 12 17:40:31.423716 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:40:31.426022 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:40:31.428180 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:53514.service - OpenSSH per-connection server daemon (10.0.0.1:53514). Dec 12 17:40:31.429474 systemd-logind[1556]: Removed session 2. Dec 12 17:40:31.483169 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 53514 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:31.484471 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.488799 systemd-logind[1556]: New session 3 of user core. Dec 12 17:40:31.501618 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:40:31.509547 sshd[1727]: Connection closed by 10.0.0.1 port 53514 Dec 12 17:40:31.510024 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:31.522084 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:53514.service: Deactivated successfully. Dec 12 17:40:31.529601 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:40:31.530480 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:40:31.536579 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:53518.service - OpenSSH per-connection server daemon (10.0.0.1:53518). Dec 12 17:40:31.537786 systemd-logind[1556]: Removed session 3. Dec 12 17:40:31.604226 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 53518 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:31.607976 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.612549 systemd-logind[1556]: New session 4 of user core. Dec 12 17:40:31.624103 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:40:31.638926 sshd[1738]: Connection closed by 10.0.0.1 port 53518 Dec 12 17:40:31.639241 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:31.654685 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:53518.service: Deactivated successfully. Dec 12 17:40:31.656568 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:40:31.659212 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:40:31.660042 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:53528.service - OpenSSH per-connection server daemon (10.0.0.1:53528). Dec 12 17:40:31.661183 systemd-logind[1556]: Removed session 4. 
Dec 12 17:40:31.730178 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 53528 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:31.732052 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.737182 systemd-logind[1556]: New session 5 of user core. Dec 12 17:40:31.748586 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:40:31.767601 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:40:31.770016 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:31.785760 sudo[1748]: pam_unix(sudo:session): session closed for user root Dec 12 17:40:31.787720 sshd[1747]: Connection closed by 10.0.0.1 port 53528 Dec 12 17:40:31.788331 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:31.798051 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:53528.service: Deactivated successfully. Dec 12 17:40:31.802981 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:40:31.803818 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:40:31.813855 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:53530.service - OpenSSH per-connection server daemon (10.0.0.1:53530). Dec 12 17:40:31.814426 systemd-logind[1556]: Removed session 5. Dec 12 17:40:31.865920 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:31.867231 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.872691 systemd-logind[1556]: New session 6 of user core. Dec 12 17:40:31.884017 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:40:31.897793 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:40:31.898072 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:31.939872 sudo[1759]: pam_unix(sudo:session): session closed for user root Dec 12 17:40:31.946270 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:40:31.946790 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:31.956500 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:40:31.992000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:40:31.993023 augenrules[1781]: No rules Dec 12 17:40:31.995139 kernel: kauditd_printk_skb: 176 callbacks suppressed Dec 12 17:40:31.995179 kernel: audit: type=1305 audit(1765561231.992:219): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:40:31.992000 audit[1781]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffef52c610 a2=420 a3=0 items=0 ppid=1762 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:31.996241 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:40:31.996488 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 12 17:40:31.997434 sudo[1758]: pam_unix(sudo:session): session closed for user root Dec 12 17:40:31.999208 sshd[1757]: Connection closed by 10.0.0.1 port 53530 Dec 12 17:40:31.999541 kernel: audit: type=1300 audit(1765561231.992:219): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffef52c610 a2=420 a3=0 items=0 ppid=1762 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:31.999584 kernel: audit: type=1327 audit(1765561231.992:219): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:40:31.992000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:40:31.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.001205 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:32.005635 kernel: audit: type=1130 audit(1765561231.994:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.005697 kernel: audit: type=1131 audit(1765561231.994:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:31.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.006064 kernel: audit: type=1106 audit(1765561231.994:222): pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:40:31.994000 audit[1758]: USER_END pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.008841 kernel: audit: type=1104 audit(1765561231.994:223): pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:40:31.994000 audit[1758]: CRED_DISP pid=1758 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:32.000000 audit[1754]: USER_END pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.015463 kernel: audit: type=1106 audit(1765561232.000:224): pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.015516 kernel: audit: type=1104 audit(1765561232.000:225): pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.000000 audit[1754]: CRED_DISP pid=1754 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.131:22-10.0.0.1:53530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.023066 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:53530.service: Deactivated successfully. Dec 12 17:40:32.024674 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:40:32.026850 kernel: audit: type=1131 audit(1765561232.021:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.131:22-10.0.0.1:53530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.028132 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:40:32.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.131:22-10.0.0.1:53546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.030089 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:53546.service - OpenSSH per-connection server daemon (10.0.0.1:53546). Dec 12 17:40:32.031177 systemd-logind[1556]: Removed session 6. 
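
Note: in the audit records above (and in the NETFILTER_CFG records further down) the PROCTITLE field is the process command line, hex-encoded with NUL separators between arguments; the auditctl value logged earlier decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A short decoding sketch:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
        return bytes.fromhex(hex_value).decode("utf-8", errors="replace").replace("\x00", " ")

    if __name__ == "__main__":
        # Value copied from the auditctl PROCTITLE record above.
        print(decode_proctitle(
            "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
        ))  # -> /sbin/auditctl -R /etc/audit/audit.rules
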
Dec 12 17:40:32.085000 audit[1790]: USER_ACCT pid=1790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.086752 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 53546 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:40:32.086000 audit[1790]: CRED_ACQ pid=1790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.086000 audit[1790]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0f65b30 a2=3 a3=0 items=0 ppid=1 pid=1790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.086000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:40:32.087943 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:32.092897 systemd-logind[1556]: New session 7 of user core. Dec 12 17:40:32.103861 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:40:32.110000 audit[1790]: USER_START pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.111000 audit[1793]: CRED_ACQ pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:40:32.118000 audit[1794]: USER_ACCT pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.119856 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:40:32.119000 audit[1794]: CRED_REFR pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.120541 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:32.121000 audit[1794]: USER_START pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:40:32.412385 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 12 17:40:32.425136 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:40:32.626783 dockerd[1815]: time="2025-12-12T17:40:32.626714196Z" level=info msg="Starting up" Dec 12 17:40:32.628406 dockerd[1815]: time="2025-12-12T17:40:32.628356705Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:40:32.640377 dockerd[1815]: time="2025-12-12T17:40:32.640312083Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:40:32.790980 dockerd[1815]: time="2025-12-12T17:40:32.790839423Z" level=info msg="Loading containers: start." Dec 12 17:40:32.800853 kernel: Initializing XFRM netlink socket Dec 12 17:40:32.843000 audit[1871]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.843000 audit[1871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffc11a6f0 a2=0 a3=0 items=0 ppid=1815 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:40:32.845000 audit[1873]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.845000 audit[1873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd57ca170 a2=0 a3=0 items=0 ppid=1815 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:40:32.847000 audit[1875]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.847000 audit[1875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb3336a0 a2=0 a3=0 items=0 ppid=1815 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:40:32.849000 audit[1877]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.849000 audit[1877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe23463c0 a2=0 a3=0 items=0 ppid=1815 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:40:32.850000 audit[1879]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.850000 audit[1879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffedc35200 a2=0 a3=0 items=0 ppid=1815 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:40:32.852000 audit[1881]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.852000 audit[1881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdd0d8880 a2=0 a3=0 items=0 ppid=1815 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:40:32.855000 audit[1883]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.855000 audit[1883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe2a6bb50 a2=0 a3=0 items=0 ppid=1815 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:40:32.857000 audit[1885]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.857000 audit[1885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffec09fc10 a2=0 a3=0 items=0 ppid=1815 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:40:32.884000 audit[1888]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.884000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffdc5554d0 a2=0 a3=0 items=0 ppid=1815 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:40:32.886000 audit[1890]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1890 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.886000 audit[1890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdb725870 a2=0 a3=0 items=0 ppid=1815 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:40:32.888000 audit[1892]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.888000 audit[1892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffeae45c50 a2=0 a3=0 items=0 ppid=1815 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:40:32.889000 audit[1894]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.889000 audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd0308760 a2=0 a3=0 items=0 ppid=1815 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:40:32.891000 audit[1896]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.891000 audit[1896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff14abc30 a2=0 a3=0 items=0 ppid=1815 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:40:32.927000 audit[1926]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.927000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff4aadfa0 a2=0 a3=0 items=0 ppid=1815 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:40:32.929000 audit[1928]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.929000 audit[1928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff8b133c0 a2=0 a3=0 items=0 ppid=1815 
pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:40:32.931000 audit[1930]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.931000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc3ed110 a2=0 a3=0 items=0 ppid=1815 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:40:32.933000 audit[1932]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.933000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe58d3560 a2=0 a3=0 items=0 ppid=1815 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.933000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:40:32.934000 audit[1934]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.934000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4bfe1f0 a2=0 a3=0 items=0 ppid=1815 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:40:32.936000 audit[1936]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.936000 audit[1936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc14c6370 a2=0 a3=0 items=0 ppid=1815 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:40:32.938000 audit[1938]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.938000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe5f46540 a2=0 a3=0 items=0 ppid=1815 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:40:32.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:40:32.940000 audit[1940]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.940000 audit[1940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffefaff770 a2=0 a3=0 items=0 ppid=1815 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:40:32.943000 audit[1942]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.943000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffffdecd80 a2=0 a3=0 items=0 ppid=1815 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.943000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:40:32.945000 audit[1944]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.945000 audit[1944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc98dd680 a2=0 a3=0 items=0 ppid=1815 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:40:32.947000 audit[1946]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.947000 audit[1946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffea29c0d0 a2=0 a3=0 items=0 ppid=1815 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:40:32.949000 audit[1948]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.949000 audit[1948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdf6b6c60 a2=0 a3=0 items=0 ppid=1815 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.949000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:40:32.951000 audit[1950]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.951000 audit[1950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe58fe230 a2=0 a3=0 items=0 ppid=1815 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:40:32.956000 audit[1955]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.956000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffd676750 a2=0 a3=0 items=0 ppid=1815 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:40:32.958000 audit[1957]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.958000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff7658ed0 a2=0 a3=0 items=0 ppid=1815 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:40:32.961000 audit[1959]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.961000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffdfa7f40 a2=0 a3=0 items=0 ppid=1815 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:40:32.963000 audit[1961]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.963000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffccb1090 a2=0 a3=0 items=0 ppid=1815 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:40:32.965000 audit[1963]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.965000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffefcbf440 a2=0 a3=0 items=0 ppid=1815 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:40:32.967000 audit[1965]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:32.967000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd3bfa650 a2=0 a3=0 items=0 ppid=1815 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:40:32.980000 audit[1970]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.980000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc3208af0 a2=0 a3=0 items=0 ppid=1815 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:40:32.983000 audit[1972]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.983000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd5b8f230 a2=0 a3=0 items=0 ppid=1815 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:40:32.992000 audit[1980]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:32.992000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc8059e80 a2=0 a3=0 items=0 ppid=1815 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:32.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:40:33.000000 audit[1986]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 
17:40:33.000000 audit[1986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe092cb90 a2=0 a3=0 items=0 ppid=1815 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:33.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:40:33.002000 audit[1988]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:33.002000 audit[1988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe1faf960 a2=0 a3=0 items=0 ppid=1815 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:33.002000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:40:33.004000 audit[1990]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:33.004000 audit[1990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc2291a10 a2=0 a3=0 items=0 ppid=1815 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:33.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:40:33.006000 audit[1992]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:33.006000 audit[1992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffecacbb90 a2=0 a3=0 items=0 ppid=1815 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:33.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:40:33.008000 audit[1994]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:33.008000 audit[1994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffed4d8910 a2=0 a3=0 items=0 ppid=1815 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:33.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:40:33.009901 
systemd-networkd[1490]: docker0: Link UP Dec 12 17:40:33.013070 dockerd[1815]: time="2025-12-12T17:40:33.013012699Z" level=info msg="Loading containers: done." Dec 12 17:40:33.027347 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck65118093-merged.mount: Deactivated successfully. Dec 12 17:40:33.034466 dockerd[1815]: time="2025-12-12T17:40:33.034406816Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:40:33.034600 dockerd[1815]: time="2025-12-12T17:40:33.034489391Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:40:33.034672 dockerd[1815]: time="2025-12-12T17:40:33.034635029Z" level=info msg="Initializing buildkit" Dec 12 17:40:33.056542 dockerd[1815]: time="2025-12-12T17:40:33.056439612Z" level=info msg="Completed buildkit initialization" Dec 12 17:40:33.061006 dockerd[1815]: time="2025-12-12T17:40:33.060977017Z" level=info msg="Daemon has completed initialization" Dec 12 17:40:33.061157 dockerd[1815]: time="2025-12-12T17:40:33.061039347Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:40:33.061423 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:40:33.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:33.466922 containerd[1582]: time="2025-12-12T17:40:33.466802613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 17:40:33.996765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055967497.mount: Deactivated successfully. 
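
Note: the dockerd and containerd entries above carry logfmt-style fields (time="...", level=..., msg="...", plus extras such as storage-driver or version), which makes them straightforward to split into key/value pairs. A minimal parser sketch that does not handle escaped quotes inside msg strings:

    import re

    # Splits fields like: time="2025-12-12T17:40:33.061039347Z" level=info msg="API listen on /run/docker.sock"
    FIELD_RE = re.compile(r'(\w[\w.-]*)=("[^"]*"|\S+)')

    def parse_logfmt(entry: str) -> dict:
        """Return the key/value fields of a dockerd- or containerd-style log entry."""
        return {key: raw.strip('"') for key, raw in FIELD_RE.findall(entry)}

    if __name__ == "__main__":
        sample = 'time="2025-12-12T17:40:33.061039347Z" level=info msg="API listen on /run/docker.sock"'
        fields = parse_logfmt(sample)
        print(fields["level"], fields["msg"])
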
Dec 12 17:40:34.734685 containerd[1582]: time="2025-12-12T17:40:34.734640098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:34.735374 containerd[1582]: time="2025-12-12T17:40:34.735326606Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=0" Dec 12 17:40:34.736236 containerd[1582]: time="2025-12-12T17:40:34.736213149Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:34.739843 containerd[1582]: time="2025-12-12T17:40:34.739479282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:34.740813 containerd[1582]: time="2025-12-12T17:40:34.740788089Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.273930474s" Dec 12 17:40:34.740954 containerd[1582]: time="2025-12-12T17:40:34.740936945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 12 17:40:34.741636 containerd[1582]: time="2025-12-12T17:40:34.741615676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 17:40:35.829915 containerd[1582]: time="2025-12-12T17:40:35.829867018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:35.830508 containerd[1582]: time="2025-12-12T17:40:35.830488112Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Dec 12 17:40:35.831645 containerd[1582]: time="2025-12-12T17:40:35.831599036Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:35.834128 containerd[1582]: time="2025-12-12T17:40:35.834089953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:35.835260 containerd[1582]: time="2025-12-12T17:40:35.835193687Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.093479988s" Dec 12 17:40:35.835260 containerd[1582]: time="2025-12-12T17:40:35.835223750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 12 17:40:35.835698 
containerd[1582]: time="2025-12-12T17:40:35.835669298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 17:40:36.684369 containerd[1582]: time="2025-12-12T17:40:36.684323785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:36.685212 containerd[1582]: time="2025-12-12T17:40:36.684826093Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Dec 12 17:40:36.686012 containerd[1582]: time="2025-12-12T17:40:36.685954259Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:36.688599 containerd[1582]: time="2025-12-12T17:40:36.688567697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:36.689660 containerd[1582]: time="2025-12-12T17:40:36.689630703Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 853.93156ms" Dec 12 17:40:36.689756 containerd[1582]: time="2025-12-12T17:40:36.689737830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 12 17:40:36.690301 containerd[1582]: time="2025-12-12T17:40:36.690277767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 17:40:37.722308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067279705.mount: Deactivated successfully. 
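
Note: the "Pulled image ... size ... in ..." messages report a byte count and the wall-clock pull time, so an approximate per-image pull rate can be computed from the journal text alone. A sketch, assuming the escaped-quote form shown above and sizes in bytes as logged; the input file name is hypothetical:

    import re

    # Matches: Pulled image \"registry.k8s.io/etcd:3.6.4-0\" ... size \"98207481\" in 2.647319668s
    PULLED_RE = re.compile(
        r'Pulled image \\"(?P<image>[^"\\]+)\\".*?'
        r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def pull_stats(journal_text: str):
        """Yield (image, bytes, seconds, MiB/s) for each completed image pull."""
        for m in PULLED_RE.finditer(journal_text):
            secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
            size = int(m["size"])
            yield m["image"], size, secs, size / secs / (1024 * 1024)

    if __name__ == "__main__":
        with open("journal.txt", encoding="utf-8", errors="replace") as fh:  # hypothetical file name
            for image, size, secs, rate in pull_stats(fh.read()):
                print(f"{image}: {size} bytes in {secs:.3f}s ({rate:.1f} MiB/s)")
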
Dec 12 17:40:37.884193 containerd[1582]: time="2025-12-12T17:40:37.884136155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:37.884938 containerd[1582]: time="2025-12-12T17:40:37.884897655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=9841285" Dec 12 17:40:37.885890 containerd[1582]: time="2025-12-12T17:40:37.885860743Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:37.887729 containerd[1582]: time="2025-12-12T17:40:37.887703813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:37.888206 containerd[1582]: time="2025-12-12T17:40:37.888173901Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.197863792s" Dec 12 17:40:37.888254 containerd[1582]: time="2025-12-12T17:40:37.888208916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 12 17:40:37.889165 containerd[1582]: time="2025-12-12T17:40:37.889144397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 17:40:38.347260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552677342.mount: Deactivated successfully. Dec 12 17:40:39.049269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:40:39.051466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:39.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:39.193903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:39.197193 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:40:39.197273 kernel: audit: type=1130 audit(1765561239.192:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:40:39.198316 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:40:39.383153 containerd[1582]: time="2025-12-12T17:40:39.383044414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:39.383880 containerd[1582]: time="2025-12-12T17:40:39.383811066Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19575910" Dec 12 17:40:39.384873 containerd[1582]: time="2025-12-12T17:40:39.384821172Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:39.387829 containerd[1582]: time="2025-12-12T17:40:39.387602384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:39.390222 containerd[1582]: time="2025-12-12T17:40:39.390181034Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.500904227s" Dec 12 17:40:39.390222 containerd[1582]: time="2025-12-12T17:40:39.390224953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 12 17:40:39.390762 containerd[1582]: time="2025-12-12T17:40:39.390621923Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 17:40:39.393555 kubelet[2172]: E1212 17:40:39.393462 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:40:39.397122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:40:39.397358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:40:39.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:40:39.398025 systemd[1]: kubelet.service: Consumed 151ms CPU time, 106M memory peak. Dec 12 17:40:39.400831 kernel: audit: type=1131 audit(1765561239.397:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:40:39.851499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664783871.mount: Deactivated successfully. 
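
Note: the first kubelet start above exits with status 1 simply because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written when the node is initialized or joined (for example by kubeadm), after which the scheduled restarts can succeed. A trivial pre-flight check along those lines:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_present() -> bool:
        """True once node provisioning has written the kubelet config file."""
        return KUBELET_CONFIG.is_file()

    if __name__ == "__main__":
        if not kubelet_config_present():
            print(f"{KUBELET_CONFIG} is missing; kubelet will keep exiting until the node is provisioned")
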
Dec 12 17:40:39.857827 containerd[1582]: time="2025-12-12T17:40:39.857739633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:39.858435 containerd[1582]: time="2025-12-12T17:40:39.858386839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 12 17:40:39.859379 containerd[1582]: time="2025-12-12T17:40:39.859354801Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:39.861711 containerd[1582]: time="2025-12-12T17:40:39.861413281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:39.862507 containerd[1582]: time="2025-12-12T17:40:39.862443893Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 471.526724ms" Dec 12 17:40:39.862507 containerd[1582]: time="2025-12-12T17:40:39.862488943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 12 17:40:39.863215 containerd[1582]: time="2025-12-12T17:40:39.863193228Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 17:40:40.691613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459674108.mount: Deactivated successfully. Dec 12 17:40:42.504528 containerd[1582]: time="2025-12-12T17:40:42.504477236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:42.505521 containerd[1582]: time="2025-12-12T17:40:42.505251447Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Dec 12 17:40:42.506355 containerd[1582]: time="2025-12-12T17:40:42.506305568Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:42.509973 containerd[1582]: time="2025-12-12T17:40:42.509921946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:40:42.510568 containerd[1582]: time="2025-12-12T17:40:42.510541612Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.647319668s" Dec 12 17:40:42.510603 containerd[1582]: time="2025-12-12T17:40:42.510574131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 12 17:40:47.636425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:40:47.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:47.636580 systemd[1]: kubelet.service: Consumed 151ms CPU time, 106M memory peak. Dec 12 17:40:47.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:47.638671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:47.641913 kernel: audit: type=1130 audit(1765561247.635:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:47.642006 kernel: audit: type=1131 audit(1765561247.635:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:47.667232 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-7.scope)... Dec 12 17:40:47.667251 systemd[1]: Reloading... Dec 12 17:40:47.759837 zram_generator::config[2317]: No configuration found. Dec 12 17:40:47.970508 systemd[1]: Reloading finished in 302 ms. Dec 12 17:40:47.999000 audit: BPF prog-id=61 op=LOAD Dec 12 17:40:47.999000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:40:48.003027 kernel: audit: type=1334 audit(1765561247.999:281): prog-id=61 op=LOAD Dec 12 17:40:48.003084 kernel: audit: type=1334 audit(1765561247.999:282): prog-id=42 op=UNLOAD Dec 12 17:40:48.003104 kernel: audit: type=1334 audit(1765561247.999:283): prog-id=62 op=LOAD Dec 12 17:40:47.999000 audit: BPF prog-id=62 op=LOAD Dec 12 17:40:48.003865 kernel: audit: type=1334 audit(1765561247.999:284): prog-id=63 op=LOAD Dec 12 17:40:47.999000 audit: BPF prog-id=63 op=LOAD Dec 12 17:40:47.999000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:40:48.005510 kernel: audit: type=1334 audit(1765561247.999:285): prog-id=43 op=UNLOAD Dec 12 17:40:48.005544 kernel: audit: type=1334 audit(1765561247.999:286): prog-id=44 op=UNLOAD Dec 12 17:40:48.005564 kernel: audit: type=1334 audit(1765561248.001:287): prog-id=64 op=LOAD Dec 12 17:40:47.999000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:40:48.001000 audit: BPF prog-id=64 op=LOAD Dec 12 17:40:48.008058 kernel: audit: type=1334 audit(1765561248.001:288): prog-id=57 op=UNLOAD Dec 12 17:40:48.001000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:40:48.002000 audit: BPF prog-id=65 op=LOAD Dec 12 17:40:48.002000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:40:48.003000 audit: BPF prog-id=66 op=LOAD Dec 12 17:40:48.003000 audit: BPF prog-id=67 op=LOAD Dec 12 17:40:48.003000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:40:48.003000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:40:48.007000 audit: BPF prog-id=68 op=LOAD Dec 12 17:40:48.025000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:40:48.025000 audit: BPF prog-id=69 op=LOAD Dec 12 17:40:48.025000 audit: BPF prog-id=70 op=LOAD Dec 12 17:40:48.025000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:40:48.025000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:40:48.026000 audit: BPF prog-id=71 op=LOAD Dec 12 17:40:48.026000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:40:48.026000 audit: BPF prog-id=72 op=LOAD Dec 12 17:40:48.026000 audit: BPF prog-id=73 op=LOAD Dec 12 17:40:48.026000 
audit: BPF prog-id=49 op=UNLOAD Dec 12 17:40:48.026000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:40:48.027000 audit: BPF prog-id=74 op=LOAD Dec 12 17:40:48.027000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:40:48.027000 audit: BPF prog-id=75 op=LOAD Dec 12 17:40:48.027000 audit: BPF prog-id=76 op=LOAD Dec 12 17:40:48.027000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:40:48.027000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:40:48.027000 audit: BPF prog-id=77 op=LOAD Dec 12 17:40:48.027000 audit: BPF prog-id=78 op=LOAD Dec 12 17:40:48.027000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:40:48.027000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:40:48.028000 audit: BPF prog-id=79 op=LOAD Dec 12 17:40:48.028000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:40:48.029000 audit: BPF prog-id=80 op=LOAD Dec 12 17:40:48.029000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:40:48.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:48.043213 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:48.046407 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:40:48.046725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:48.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:48.046793 systemd[1]: kubelet.service: Consumed 105ms CPU time, 95M memory peak. Dec 12 17:40:48.048683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:48.184860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:48.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:48.200188 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:40:48.236992 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:40:48.236992 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:40:48.237640 kubelet[2361]: I1212 17:40:48.237547 2361 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:40:48.676501 kubelet[2361]: I1212 17:40:48.676378 2361 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:40:48.676501 kubelet[2361]: I1212 17:40:48.676413 2361 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:40:48.676501 kubelet[2361]: I1212 17:40:48.676435 2361 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:40:48.676501 kubelet[2361]: I1212 17:40:48.676441 2361 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
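
Note: kubelet reports above that the systemd watchdog is not enabled, so it skips its watchdog health-check integration; whether a watchdog interval is configured for the unit can be read back from systemd. A small sketch using systemctl's WatchdogUSec property, meant to be run on the node itself:

    import subprocess

    def watchdog_setting(unit: str = "kubelet.service") -> str:
        """Return systemd's watchdog interval for a unit, e.g. 'WatchdogUSec=0' when disabled."""
        result = subprocess.run(
            ["systemctl", "show", "-p", "WatchdogUSec", unit],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print(watchdog_setting())
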
Dec 12 17:40:48.676951 kubelet[2361]: I1212 17:40:48.676709 2361 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:40:48.776484 kubelet[2361]: E1212 17:40:48.776429 2361 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:40:48.777323 kubelet[2361]: I1212 17:40:48.777300 2361 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:40:48.782769 kubelet[2361]: I1212 17:40:48.782724 2361 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:40:48.785505 kubelet[2361]: I1212 17:40:48.785486 2361 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 12 17:40:48.785730 kubelet[2361]: I1212 17:40:48.785700 2361 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:40:48.785949 kubelet[2361]: I1212 17:40:48.785732 2361 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:40:48.786063 kubelet[2361]: I1212 17:40:48.785952 2361 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:40:48.786063 kubelet[2361]: I1212 17:40:48.785961 2361 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:40:48.786113 kubelet[2361]: I1212 17:40:48.786103 2361 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:40:48.788433 kubelet[2361]: I1212 17:40:48.788409 2361 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:40:48.789675 kubelet[2361]: I1212 17:40:48.789652 2361 kubelet.go:475] "Attempting to sync node with API server" Dec 12 
17:40:48.789719 kubelet[2361]: I1212 17:40:48.789683 2361 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:40:48.789719 kubelet[2361]: I1212 17:40:48.789711 2361 kubelet.go:387] "Adding apiserver pod source"
Dec 12 17:40:48.789788 kubelet[2361]: I1212 17:40:48.789723 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:40:48.790787 kubelet[2361]: E1212 17:40:48.790744 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 17:40:48.790912 kubelet[2361]: E1212 17:40:48.790889 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 17:40:48.790969 kubelet[2361]: I1212 17:40:48.790926 2361 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 12 17:40:48.791627 kubelet[2361]: I1212 17:40:48.791592 2361 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:40:48.791627 kubelet[2361]: I1212 17:40:48.791629 2361 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 17:40:48.791733 kubelet[2361]: W1212 17:40:48.791673 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
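The two reflector failures above are plain TCP connection refusals against the API server endpoint the kubelet is trying to reach (10.0.0.131:6443, taken directly from the error text). A small sketch that reproduces the same check from the node; the 2-second timeout is an arbitrary choice:

    import socket

    # API server endpoint as reported in the "Failed to watch" errors above.
    HOST, PORT = "10.0.0.131", 6443

    try:
        # A refusal here corresponds to the kubelet's
        # "dial tcp 10.0.0.131:6443: connect: connection refused" messages.
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"TCP connect to {HOST}:{PORT} succeeded")
    except OSError as exc:
        print(f"TCP connect to {HOST}:{PORT} failed: {exc}")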
Dec 12 17:40:48.796193 kubelet[2361]: I1212 17:40:48.796162 2361 server.go:1262] "Started kubelet" Dec 12 17:40:48.797141 kubelet[2361]: I1212 17:40:48.797116 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.800513 2361 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.800705 2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.800890 2361 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.800985 2361 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.801397 2361 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:40:48.802836 kubelet[2361]: E1212 17:40:48.801495 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:40:48.802836 kubelet[2361]: E1212 17:40:48.801900 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms" Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.801925 2361 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:40:48.802836 kubelet[2361]: I1212 17:40:48.802762 2361 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:40:48.803115 kubelet[2361]: I1212 17:40:48.797976 2361 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:40:48.803190 kubelet[2361]: I1212 17:40:48.803162 2361 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:40:48.803267 kubelet[2361]: I1212 17:40:48.803257 2361 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:40:48.803436 kubelet[2361]: I1212 17:40:48.803417 2361 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:40:48.803657 kubelet[2361]: E1212 17:40:48.801969 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808899f7af86b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:40:48.796083895 +0000 UTC m=+0.592650123,LastTimestamp:2025-12-12 17:40:48.796083895 +0000 UTC m=+0.592650123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:40:48.803863 kubelet[2361]: I1212 17:40:48.803795 2361 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:40:48.804157 kubelet[2361]: E1212 17:40:48.804120 2361 reflector.go:205] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:40:48.804749 kubelet[2361]: E1212 17:40:48.804728 2361 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:40:48.806000 audit[2379]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.806000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd06f13a0 a2=0 a3=0 items=0 ppid=2361 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:40:48.807000 audit[2380]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.807000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda505900 a2=0 a3=0 items=0 ppid=2361 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:40:48.809000 audit[2382]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.809000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe15212b0 a2=0 a3=0 items=0 ppid=2361 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:40:48.812000 audit[2384]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.812000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdd90cb10 a2=0 a3=0 items=0 ppid=2361 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:40:48.817323 kubelet[2361]: I1212 17:40:48.817303 2361 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:40:48.817323 kubelet[2361]: I1212 17:40:48.817318 2361 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:40:48.817477 kubelet[2361]: I1212 17:40:48.817338 2361 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:40:48.818964 kubelet[2361]: I1212 17:40:48.818931 2361 
policy_none.go:49] "None policy: Start" Dec 12 17:40:48.818964 kubelet[2361]: I1212 17:40:48.818960 2361 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:40:48.819079 kubelet[2361]: I1212 17:40:48.818977 2361 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:40:48.820764 kubelet[2361]: I1212 17:40:48.820740 2361 policy_none.go:47] "Start" Dec 12 17:40:48.822000 audit[2391]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.822000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd7b44d60 a2=0 a3=0 items=0 ppid=2361 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 12 17:40:48.823537 kubelet[2361]: I1212 17:40:48.823497 2361 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:40:48.824000 audit[2394]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.826528 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:40:48.824000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2d33640 a2=0 a3=0 items=0 ppid=2361 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:40:48.826000 audit[2393]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:48.826000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff530c7e0 a2=0 a3=0 items=0 ppid=2361 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:40:48.828072 kubelet[2361]: I1212 17:40:48.828037 2361 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:40:48.828072 kubelet[2361]: I1212 17:40:48.828071 2361 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:40:48.828139 kubelet[2361]: I1212 17:40:48.828096 2361 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:40:48.828161 kubelet[2361]: E1212 17:40:48.828145 2361 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:40:48.827000 audit[2395]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.827000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4457590 a2=0 a3=0 items=0 ppid=2361 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:40:48.829499 kubelet[2361]: E1212 17:40:48.829463 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:40:48.829000 audit[2397]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:48.829000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffded92370 a2=0 a3=0 items=0 ppid=2361 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:40:48.829000 audit[2398]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:40:48.829000 audit[2398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde834110 a2=0 a3=0 items=0 ppid=2361 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:40:48.830000 audit[2399]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:48.830000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc39f3140 a2=0 a3=0 items=0 ppid=2361 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:40:48.832000 audit[2400]: NETFILTER_CFG table=filter:53 
family=10 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:40:48.832000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffedb4c390 a2=0 a3=0 items=0 ppid=2361 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:48.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:40:48.838486 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:40:48.848757 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:40:48.850842 kubelet[2361]: E1212 17:40:48.850592 2361 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:40:48.851249 kubelet[2361]: I1212 17:40:48.851214 2361 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:40:48.851313 kubelet[2361]: I1212 17:40:48.851238 2361 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:40:48.852169 kubelet[2361]: I1212 17:40:48.852102 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:40:48.852951 kubelet[2361]: E1212 17:40:48.852932 2361 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:40:48.853019 kubelet[2361]: E1212 17:40:48.852985 2361 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:40:48.939833 systemd[1]: Created slice kubepods-burstable-pod13618bc4b2f1ece5807e228b3b9a3a9e.slice - libcontainer container kubepods-burstable-pod13618bc4b2f1ece5807e228b3b9a3a9e.slice. Dec 12 17:40:48.954398 kubelet[2361]: I1212 17:40:48.954319 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:40:48.954883 kubelet[2361]: E1212 17:40:48.954845 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Dec 12 17:40:48.957780 kubelet[2361]: E1212 17:40:48.957441 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:48.961064 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 12 17:40:48.973150 kubelet[2361]: E1212 17:40:48.973113 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:48.975715 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
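The proctitle= fields in the NETFILTER_CFG audit records above are hex-encoded command lines with NUL-separated arguments. A short sketch decoding the first of them (the KUBE-IPTABLES-HINT chain registration); the value is copied verbatim from that record:

    # PROCTITLE from the first NETFILTER_CFG record above: a hex-encoded argv
    # in which NUL bytes separate the individual arguments.
    PROCTITLE = (
        "69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54"
        "002D74006D616E676C65"
    )

    argv = bytes.fromhex(PROCTITLE).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # Prints: iptables -w 5 -N KUBE-IPTABLES-HINT -t mangle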
Dec 12 17:40:48.977719 kubelet[2361]: E1212 17:40:48.977694 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:49.003140 kubelet[2361]: E1212 17:40:49.003103 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" Dec 12 17:40:49.004245 kubelet[2361]: I1212 17:40:49.004218 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13618bc4b2f1ece5807e228b3b9a3a9e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13618bc4b2f1ece5807e228b3b9a3a9e\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:49.004342 kubelet[2361]: I1212 17:40:49.004326 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:49.004418 kubelet[2361]: I1212 17:40:49.004406 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:49.004526 kubelet[2361]: I1212 17:40:49.004496 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:49.004596 kubelet[2361]: I1212 17:40:49.004539 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13618bc4b2f1ece5807e228b3b9a3a9e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13618bc4b2f1ece5807e228b3b9a3a9e\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:49.004596 kubelet[2361]: I1212 17:40:49.004561 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13618bc4b2f1ece5807e228b3b9a3a9e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13618bc4b2f1ece5807e228b3b9a3a9e\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:49.004596 kubelet[2361]: I1212 17:40:49.004581 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:49.004596 kubelet[2361]: I1212 17:40:49.004594 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:49.004669 kubelet[2361]: I1212 17:40:49.004617 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:49.156431 kubelet[2361]: I1212 17:40:49.156392 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:40:49.156835 kubelet[2361]: E1212 17:40:49.156784 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Dec 12 17:40:49.260448 kubelet[2361]: E1212 17:40:49.260331 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:49.261376 containerd[1582]: time="2025-12-12T17:40:49.261213575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13618bc4b2f1ece5807e228b3b9a3a9e,Namespace:kube-system,Attempt:0,}" Dec 12 17:40:49.275744 kubelet[2361]: E1212 17:40:49.275643 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:49.276198 containerd[1582]: time="2025-12-12T17:40:49.276162700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 12 17:40:49.280836 kubelet[2361]: E1212 17:40:49.280766 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:49.281253 containerd[1582]: time="2025-12-12T17:40:49.281215790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 12 17:40:49.404233 kubelet[2361]: E1212 17:40:49.404192 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" Dec 12 17:40:49.558684 kubelet[2361]: I1212 17:40:49.558298 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:40:49.558777 kubelet[2361]: E1212 17:40:49.558689 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Dec 12 17:40:49.672790 kubelet[2361]: E1212 17:40:49.672722 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 
17:40:49.736637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555952144.mount: Deactivated successfully. Dec 12 17:40:49.744010 containerd[1582]: time="2025-12-12T17:40:49.743948289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:40:49.746484 containerd[1582]: time="2025-12-12T17:40:49.746409136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:40:49.749859 containerd[1582]: time="2025-12-12T17:40:49.749135503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:40:49.750863 containerd[1582]: time="2025-12-12T17:40:49.750782750Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:40:49.751363 containerd[1582]: time="2025-12-12T17:40:49.751310301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:40:49.751998 containerd[1582]: time="2025-12-12T17:40:49.751972255Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:40:49.752762 containerd[1582]: time="2025-12-12T17:40:49.752711075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:40:49.753860 containerd[1582]: time="2025-12-12T17:40:49.753821068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:40:49.755526 containerd[1582]: time="2025-12-12T17:40:49.755123124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 491.634309ms" Dec 12 17:40:49.758787 containerd[1582]: time="2025-12-12T17:40:49.758739675Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.219174ms" Dec 12 17:40:49.759747 containerd[1582]: time="2025-12-12T17:40:49.759713099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 481.609003ms" Dec 12 17:40:49.785047 containerd[1582]: time="2025-12-12T17:40:49.784984927Z" level=info msg="connecting to shim 4ec5fe64f20dcdbfca2b7dc54524646e3c9a7502e8c8be4564b0ffcc66318081" 
address="unix:///run/containerd/s/8692c16f4e09927e01c0b24701dc61ffb5bb2823d83eb10c86a0f7ee14e22032" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:40:49.793826 kubelet[2361]: E1212 17:40:49.793371 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:40:49.795245 containerd[1582]: time="2025-12-12T17:40:49.795195720Z" level=info msg="connecting to shim 40cf3f5a6af21080c8941d2e1f62df103ff91ca7f6c2bb8f123f9ea6e0ba999f" address="unix:///run/containerd/s/a8a52813c0df3db695a44217f789858299eba2a5e4684fd43293a50d4eda678b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:40:49.795978 containerd[1582]: time="2025-12-12T17:40:49.795942719Z" level=info msg="connecting to shim 0a098a367a8ed27476f3149c6adc5965a07bbc88572faf857112e3c48190e878" address="unix:///run/containerd/s/de77886374f0992b3c6a30e0448e01bb2f8e4e80441b6a70bec50218b7a938cc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:40:49.813075 systemd[1]: Started cri-containerd-4ec5fe64f20dcdbfca2b7dc54524646e3c9a7502e8c8be4564b0ffcc66318081.scope - libcontainer container 4ec5fe64f20dcdbfca2b7dc54524646e3c9a7502e8c8be4564b0ffcc66318081. Dec 12 17:40:49.817842 systemd[1]: Started cri-containerd-40cf3f5a6af21080c8941d2e1f62df103ff91ca7f6c2bb8f123f9ea6e0ba999f.scope - libcontainer container 40cf3f5a6af21080c8941d2e1f62df103ff91ca7f6c2bb8f123f9ea6e0ba999f. Dec 12 17:40:49.836067 systemd[1]: Started cri-containerd-0a098a367a8ed27476f3149c6adc5965a07bbc88572faf857112e3c48190e878.scope - libcontainer container 0a098a367a8ed27476f3149c6adc5965a07bbc88572faf857112e3c48190e878. 
Dec 12 17:40:49.836000 audit: BPF prog-id=81 op=LOAD Dec 12 17:40:49.837000 audit: BPF prog-id=82 op=LOAD Dec 12 17:40:49.837000 audit: BPF prog-id=83 op=LOAD Dec 12 17:40:49.837000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.837000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:40:49.837000 audit[2442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.837000 audit: BPF prog-id=84 op=LOAD Dec 12 17:40:49.837000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.838000 audit: BPF prog-id=85 op=LOAD Dec 12 17:40:49.838000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.838000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:40:49.838000 audit[2442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.838000 audit: BPF prog-id=86 
op=LOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2439 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.838000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2439 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.838000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:40:49.838000 audit[2442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.838000 audit: BPF prog-id=87 op=LOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2439 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.838000 audit: BPF prog-id=88 op=LOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2439 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.838000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2439 pid=2463 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.838000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2439 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.838000 audit: BPF prog-id=89 op=LOAD Dec 12 17:40:49.838000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2414 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465633566653634663230646364626663613262376463353435323436 Dec 12 17:40:49.838000 audit: BPF prog-id=90 op=LOAD Dec 12 17:40:49.838000 audit[2463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2439 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430636633663561366166323130383063383934316432653166363264 Dec 12 17:40:49.841961 kubelet[2361]: E1212 17:40:49.840021 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:40:49.846000 audit: BPF prog-id=91 op=LOAD Dec 12 17:40:49.847000 audit: BPF prog-id=92 op=LOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.847000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.847000 audit: BPF prog-id=93 op=LOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.847000 audit: BPF prog-id=94 op=LOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.847000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.847000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.847000 audit: BPF prog-id=95 op=LOAD Dec 12 17:40:49.847000 audit[2481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2440 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303938613336376138656432373437366633313439633661646335 Dec 12 17:40:49.871707 containerd[1582]: time="2025-12-12T17:40:49.871661048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13618bc4b2f1ece5807e228b3b9a3a9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ec5fe64f20dcdbfca2b7dc54524646e3c9a7502e8c8be4564b0ffcc66318081\"" Dec 12 17:40:49.873155 kubelet[2361]: E1212 17:40:49.873123 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:49.876843 containerd[1582]: time="2025-12-12T17:40:49.876659767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"40cf3f5a6af21080c8941d2e1f62df103ff91ca7f6c2bb8f123f9ea6e0ba999f\"" Dec 12 17:40:49.878073 kubelet[2361]: E1212 17:40:49.878039 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:49.878180 containerd[1582]: time="2025-12-12T17:40:49.878041896Z" level=info msg="CreateContainer within sandbox \"4ec5fe64f20dcdbfca2b7dc54524646e3c9a7502e8c8be4564b0ffcc66318081\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:40:49.882191 containerd[1582]: time="2025-12-12T17:40:49.882150833Z" level=info msg="CreateContainer within sandbox \"40cf3f5a6af21080c8941d2e1f62df103ff91ca7f6c2bb8f123f9ea6e0ba999f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:40:49.888696 containerd[1582]: time="2025-12-12T17:40:49.888646638Z" level=info msg="Container f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:40:49.899175 containerd[1582]: time="2025-12-12T17:40:49.899137746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a098a367a8ed27476f3149c6adc5965a07bbc88572faf857112e3c48190e878\"" Dec 12 17:40:49.900328 kubelet[2361]: E1212 17:40:49.900302 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:49.903458 containerd[1582]: time="2025-12-12T17:40:49.903425312Z" level=info msg="CreateContainer within sandbox 
\"0a098a367a8ed27476f3149c6adc5965a07bbc88572faf857112e3c48190e878\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:40:49.904692 containerd[1582]: time="2025-12-12T17:40:49.904637993Z" level=info msg="Container eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:40:49.907357 containerd[1582]: time="2025-12-12T17:40:49.907053812Z" level=info msg="CreateContainer within sandbox \"40cf3f5a6af21080c8941d2e1f62df103ff91ca7f6c2bb8f123f9ea6e0ba999f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc\"" Dec 12 17:40:49.907660 containerd[1582]: time="2025-12-12T17:40:49.907636415Z" level=info msg="StartContainer for \"f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc\"" Dec 12 17:40:49.909104 containerd[1582]: time="2025-12-12T17:40:49.909071271Z" level=info msg="connecting to shim f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc" address="unix:///run/containerd/s/a8a52813c0df3db695a44217f789858299eba2a5e4684fd43293a50d4eda678b" protocol=ttrpc version=3 Dec 12 17:40:49.914098 containerd[1582]: time="2025-12-12T17:40:49.914008442Z" level=info msg="CreateContainer within sandbox \"4ec5fe64f20dcdbfca2b7dc54524646e3c9a7502e8c8be4564b0ffcc66318081\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47\"" Dec 12 17:40:49.914831 containerd[1582]: time="2025-12-12T17:40:49.914796661Z" level=info msg="StartContainer for \"eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47\"" Dec 12 17:40:49.915571 containerd[1582]: time="2025-12-12T17:40:49.915521887Z" level=info msg="Container 221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:40:49.915986 containerd[1582]: time="2025-12-12T17:40:49.915961065Z" level=info msg="connecting to shim eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47" address="unix:///run/containerd/s/8692c16f4e09927e01c0b24701dc61ffb5bb2823d83eb10c86a0f7ee14e22032" protocol=ttrpc version=3 Dec 12 17:40:49.924367 containerd[1582]: time="2025-12-12T17:40:49.924297544Z" level=info msg="CreateContainer within sandbox \"0a098a367a8ed27476f3149c6adc5965a07bbc88572faf857112e3c48190e878\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3\"" Dec 12 17:40:49.925423 containerd[1582]: time="2025-12-12T17:40:49.925221248Z" level=info msg="StartContainer for \"221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3\"" Dec 12 17:40:49.926543 containerd[1582]: time="2025-12-12T17:40:49.926507867Z" level=info msg="connecting to shim 221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3" address="unix:///run/containerd/s/de77886374f0992b3c6a30e0448e01bb2f8e4e80441b6a70bec50218b7a938cc" protocol=ttrpc version=3 Dec 12 17:40:49.935027 systemd[1]: Started cri-containerd-f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc.scope - libcontainer container f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc. Dec 12 17:40:49.947050 systemd[1]: Started cri-containerd-eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47.scope - libcontainer container eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47. 
Dec 12 17:40:49.950475 systemd[1]: Started cri-containerd-221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3.scope - libcontainer container 221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3. Dec 12 17:40:49.956000 audit: BPF prog-id=96 op=LOAD Dec 12 17:40:49.958000 audit: BPF prog-id=97 op=LOAD Dec 12 17:40:49.958000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.958000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:40:49.958000 audit[2542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.958000 audit: BPF prog-id=98 op=LOAD Dec 12 17:40:49.958000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.958000 audit: BPF prog-id=99 op=LOAD Dec 12 17:40:49.958000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.959000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:40:49.959000 audit[2542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.959000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.959000 audit: BPF prog-id=98 op=UNLOAD Dec 12 17:40:49.959000 audit[2542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.959000 audit: BPF prog-id=100 op=LOAD Dec 12 17:40:49.959000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2439 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632353837306363373933386634626564666437386538666331366537 Dec 12 17:40:49.962333 kubelet[2361]: E1212 17:40:49.962300 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:40:49.962000 audit: BPF prog-id=101 op=LOAD Dec 12 17:40:49.963000 audit: BPF prog-id=102 op=LOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.963000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.963000 audit: BPF prog-id=103 op=LOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.963000 audit: BPF prog-id=104 op=LOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.963000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.963000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.963000 audit: BPF prog-id=105 op=LOAD Dec 12 17:40:49.963000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2414 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561666638346263346163303765646133396262656636346636366138 Dec 12 17:40:49.966000 audit: BPF prog-id=106 op=LOAD Dec 12 17:40:49.967000 audit: BPF prog-id=107 op=LOAD Dec 12 17:40:49.967000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2440 pid=2567 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.967000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:40:49.967000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.968000 audit: BPF prog-id=108 op=LOAD Dec 12 17:40:49.968000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2440 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.968000 audit: BPF prog-id=109 op=LOAD Dec 12 17:40:49.968000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2440 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.968000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:40:49.968000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.969000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:40:49.969000 audit[2567]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.969000 audit: BPF prog-id=110 op=LOAD Dec 12 17:40:49.969000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2440 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:40:49.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313535363435303132356263353065313336316230646134316666 Dec 12 17:40:49.993506 containerd[1582]: time="2025-12-12T17:40:49.993128083Z" level=info msg="StartContainer for \"f25870cc7938f4bedfd78e8fc16e75a6abd4e022fddac085a5da2fd3510a11cc\" returns successfully" Dec 12 17:40:50.002711 containerd[1582]: time="2025-12-12T17:40:50.002633587Z" level=info msg="StartContainer for \"eaff84bc4ac07eda39bbef64f66a8d5bc029f87add7e3beca3d3c814c9472a47\" returns successfully" Dec 12 17:40:50.010723 containerd[1582]: time="2025-12-12T17:40:50.010682195Z" level=info msg="StartContainer for \"221556450125bc50e1361b0da41ffad8bcfc1be73e3e03df8f8031e31031a5a3\" returns successfully" Dec 12 17:40:50.360538 kubelet[2361]: I1212 17:40:50.360509 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:40:50.843284 kubelet[2361]: E1212 17:40:50.843168 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:50.843575 kubelet[2361]: E1212 17:40:50.843546 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:50.843810 kubelet[2361]: E1212 17:40:50.843680 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:50.843924 kubelet[2361]: E1212 17:40:50.843902 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:50.847253 kubelet[2361]: E1212 17:40:50.847230 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:50.847374 kubelet[2361]: E1212 17:40:50.847354 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:51.850178 kubelet[2361]: E1212 17:40:51.850136 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:51.850496 kubelet[2361]: E1212 17:40:51.850271 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:51.851532 kubelet[2361]: E1212 17:40:51.851504 2361 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:40:51.851663 kubelet[2361]: E1212 17:40:51.851642 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:51.887026 kubelet[2361]: E1212 17:40:51.886983 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:40:51.944588 kubelet[2361]: I1212 17:40:51.944537 2361 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:40:51.944588 kubelet[2361]: E1212 17:40:51.944581 2361 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 12 17:40:51.965690 kubelet[2361]: E1212 17:40:51.965643 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:40:52.066418 kubelet[2361]: E1212 17:40:52.066367 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:40:52.167103 kubelet[2361]: E1212 17:40:52.166971 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:40:52.267791 kubelet[2361]: E1212 17:40:52.267741 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:40:52.403119 kubelet[2361]: I1212 17:40:52.402517 2361 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:52.409266 kubelet[2361]: E1212 17:40:52.409221 2361 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:52.409266 kubelet[2361]: I1212 17:40:52.409258 2361 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:52.411437 kubelet[2361]: E1212 17:40:52.411412 2361 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:52.411437 kubelet[2361]: I1212 17:40:52.411438 2361 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:52.413095 kubelet[2361]: E1212 17:40:52.413066 2361 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:52.793155 kubelet[2361]: I1212 17:40:52.792858 2361 apiserver.go:52] "Watching apiserver" Dec 12 17:40:52.803039 kubelet[2361]: I1212 17:40:52.802880 2361 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:40:54.134356 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-7.scope)... Dec 12 17:40:54.134377 systemd[1]: Reloading... 
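Note: the audit PROCTITLE records earlier in this section are the runc command line, hex-encoded with NUL bytes separating the argv elements. A small Go sketch, using an even-length prefix copied from one of those records, that decodes it back to readable text:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // Prefix of a proctitle value from an audit record above (the full
        // value is longer; any even-length prefix decodes the same way).
        const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // argv elements are separated by NUL bytes; print them space-separated.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
        // Prints: runc --root /run/containerd/runc/k8s.io --log
    }

The decoded prefix is "runc --root /run/containerd/runc/k8s.io --log", i.e. the shim invoking runc with its state root under /run/containerd/runc/k8s.io; the remainder of each record is the per-container log path.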
Dec 12 17:40:54.213914 zram_generator::config[2698]: No configuration found. Dec 12 17:40:54.377508 kubelet[2361]: I1212 17:40:54.377431 2361 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:54.385677 kubelet[2361]: E1212 17:40:54.385314 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:54.464273 systemd[1]: Reloading finished in 329 ms. Dec 12 17:40:54.498146 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:54.513869 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:40:54.514208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:54.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:54.514281 systemd[1]: kubelet.service: Consumed 892ms CPU time, 120.9M memory peak. Dec 12 17:40:54.514945 kernel: kauditd_printk_skb: 203 callbacks suppressed Dec 12 17:40:54.515002 kernel: audit: type=1131 audit(1765561254.512:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:54.516000 audit: BPF prog-id=111 op=LOAD Dec 12 17:40:54.516000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:40:54.516922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:54.519655 kernel: audit: type=1334 audit(1765561254.516:385): prog-id=111 op=LOAD Dec 12 17:40:54.519762 kernel: audit: type=1334 audit(1765561254.516:386): prog-id=71 op=UNLOAD Dec 12 17:40:54.516000 audit: BPF prog-id=112 op=LOAD Dec 12 17:40:54.518000 audit: BPF prog-id=113 op=LOAD Dec 12 17:40:54.520862 kernel: audit: type=1334 audit(1765561254.516:387): prog-id=112 op=LOAD Dec 12 17:40:54.518000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:40:54.522702 kernel: audit: type=1334 audit(1765561254.518:388): prog-id=113 op=LOAD Dec 12 17:40:54.522749 kernel: audit: type=1334 audit(1765561254.518:389): prog-id=72 op=UNLOAD Dec 12 17:40:54.518000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:40:54.519000 audit: BPF prog-id=114 op=LOAD Dec 12 17:40:54.524588 kernel: audit: type=1334 audit(1765561254.518:390): prog-id=73 op=UNLOAD Dec 12 17:40:54.524647 kernel: audit: type=1334 audit(1765561254.519:391): prog-id=114 op=LOAD Dec 12 17:40:54.519000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:40:54.525715 kernel: audit: type=1334 audit(1765561254.519:392): prog-id=79 op=UNLOAD Dec 12 17:40:54.519000 audit: BPF prog-id=115 op=LOAD Dec 12 17:40:54.526817 kernel: audit: type=1334 audit(1765561254.519:393): prog-id=115 op=LOAD Dec 12 17:40:54.519000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:40:54.521000 audit: BPF prog-id=116 op=LOAD Dec 12 17:40:54.529000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:40:54.529000 audit: BPF prog-id=117 op=LOAD Dec 12 17:40:54.529000 audit: BPF prog-id=118 op=LOAD Dec 12 17:40:54.529000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:40:54.529000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:40:54.529000 audit: BPF prog-id=119 op=LOAD Dec 12 17:40:54.529000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:40:54.529000 audit: BPF prog-id=120 op=LOAD Dec 12 17:40:54.529000 audit: BPF prog-id=121 op=LOAD Dec 12 17:40:54.529000 
audit: BPF prog-id=75 op=UNLOAD Dec 12 17:40:54.530000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:40:54.530000 audit: BPF prog-id=122 op=LOAD Dec 12 17:40:54.530000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:40:54.530000 audit: BPF prog-id=123 op=LOAD Dec 12 17:40:54.530000 audit: BPF prog-id=124 op=LOAD Dec 12 17:40:54.530000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:40:54.530000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:40:54.530000 audit: BPF prog-id=125 op=LOAD Dec 12 17:40:54.531000 audit: BPF prog-id=126 op=LOAD Dec 12 17:40:54.531000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:40:54.531000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:40:54.532000 audit: BPF prog-id=127 op=LOAD Dec 12 17:40:54.532000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:40:54.532000 audit: BPF prog-id=128 op=LOAD Dec 12 17:40:54.532000 audit: BPF prog-id=129 op=LOAD Dec 12 17:40:54.532000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:40:54.532000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:40:54.533000 audit: BPF prog-id=130 op=LOAD Dec 12 17:40:54.533000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:40:54.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:40:54.706107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:54.724544 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:40:54.772365 kubelet[2737]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:40:54.772365 kubelet[2737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:40:54.772718 kubelet[2737]: I1212 17:40:54.772408 2737 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:40:54.779839 kubelet[2737]: I1212 17:40:54.779783 2737 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:40:54.779839 kubelet[2737]: I1212 17:40:54.779819 2737 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:40:54.779839 kubelet[2737]: I1212 17:40:54.779854 2737 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:40:54.780025 kubelet[2737]: I1212 17:40:54.779861 2737 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:40:54.780356 kubelet[2737]: I1212 17:40:54.780340 2737 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:40:54.782841 kubelet[2737]: I1212 17:40:54.782024 2737 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:40:54.784278 kubelet[2737]: I1212 17:40:54.784242 2737 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:40:54.788518 kubelet[2737]: I1212 17:40:54.788480 2737 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:40:54.791263 kubelet[2737]: I1212 17:40:54.791225 2737 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 12 17:40:54.791509 kubelet[2737]: I1212 17:40:54.791464 2737 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:40:54.791673 kubelet[2737]: I1212 17:40:54.791495 2737 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:40:54.791673 kubelet[2737]: I1212 17:40:54.791661 2737 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:40:54.791673 kubelet[2737]: I1212 17:40:54.791669 2737 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:40:54.791793 kubelet[2737]: I1212 17:40:54.791695 2737 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:40:54.792549 kubelet[2737]: I1212 17:40:54.792505 2737 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:40:54.792727 kubelet[2737]: I1212 17:40:54.792700 2737 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:40:54.792727 kubelet[2737]: I1212 17:40:54.792727 2737 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:40:54.793294 kubelet[2737]: I1212 17:40:54.793231 2737 kubelet.go:387] "Adding apiserver pod source" Dec 
12 17:40:54.793294 kubelet[2737]: I1212 17:40:54.793258 2737 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:40:54.800932 kubelet[2737]: I1212 17:40:54.800885 2737 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:40:54.801524 kubelet[2737]: I1212 17:40:54.801503 2737 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:40:54.801590 kubelet[2737]: I1212 17:40:54.801536 2737 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:40:54.803748 kubelet[2737]: I1212 17:40:54.803722 2737 server.go:1262] "Started kubelet" Dec 12 17:40:54.805277 kubelet[2737]: I1212 17:40:54.805237 2737 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:40:54.805442 kubelet[2737]: I1212 17:40:54.805397 2737 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:40:54.808315 kubelet[2737]: I1212 17:40:54.808285 2737 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:40:54.808542 kubelet[2737]: I1212 17:40:54.808526 2737 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:40:54.809407 kubelet[2737]: I1212 17:40:54.809373 2737 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:40:54.809461 kubelet[2737]: I1212 17:40:54.809413 2737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:40:54.811006 kubelet[2737]: I1212 17:40:54.810981 2737 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:40:54.811169 kubelet[2737]: I1212 17:40:54.811154 2737 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:40:54.811412 kubelet[2737]: I1212 17:40:54.811397 2737 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:40:54.812046 kubelet[2737]: E1212 17:40:54.812019 2737 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:40:54.812084 kubelet[2737]: I1212 17:40:54.812074 2737 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:40:54.817134 kubelet[2737]: I1212 17:40:54.817101 2737 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:40:54.817256 kubelet[2737]: I1212 17:40:54.817232 2737 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:40:54.822849 kubelet[2737]: I1212 17:40:54.821772 2737 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:40:54.826521 kubelet[2737]: E1212 17:40:54.826487 2737 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:40:54.837708 kubelet[2737]: I1212 17:40:54.837669 2737 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:40:54.839530 kubelet[2737]: I1212 17:40:54.839502 2737 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:40:54.839530 kubelet[2737]: I1212 17:40:54.839529 2737 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:40:54.839530 kubelet[2737]: I1212 17:40:54.839551 2737 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:40:54.839958 kubelet[2737]: E1212 17:40:54.839602 2737 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:40:54.862862 kubelet[2737]: I1212 17:40:54.862823 2737 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:40:54.862862 kubelet[2737]: I1212 17:40:54.862845 2737 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:40:54.862862 kubelet[2737]: I1212 17:40:54.862867 2737 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:40:54.863036 kubelet[2737]: I1212 17:40:54.863004 2737 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:40:54.863036 kubelet[2737]: I1212 17:40:54.863013 2737 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:40:54.863036 kubelet[2737]: I1212 17:40:54.863029 2737 policy_none.go:49] "None policy: Start" Dec 12 17:40:54.863036 kubelet[2737]: I1212 17:40:54.863038 2737 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:40:54.863121 kubelet[2737]: I1212 17:40:54.863047 2737 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:40:54.863306 kubelet[2737]: I1212 17:40:54.863150 2737 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 17:40:54.863306 kubelet[2737]: I1212 17:40:54.863165 2737 policy_none.go:47] "Start" Dec 12 17:40:54.867671 kubelet[2737]: E1212 17:40:54.867639 2737 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:40:54.867973 kubelet[2737]: I1212 17:40:54.867933 2737 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:40:54.867973 kubelet[2737]: I1212 17:40:54.867947 2737 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:40:54.868296 kubelet[2737]: I1212 17:40:54.868270 2737 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:40:54.869700 kubelet[2737]: E1212 17:40:54.869664 2737 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:40:54.941024 kubelet[2737]: I1212 17:40:54.940985 2737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:54.941176 kubelet[2737]: I1212 17:40:54.941045 2737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:54.941176 kubelet[2737]: I1212 17:40:54.941127 2737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:54.949269 kubelet[2737]: E1212 17:40:54.949128 2737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:54.972060 kubelet[2737]: I1212 17:40:54.971952 2737 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:40:54.983123 kubelet[2737]: I1212 17:40:54.983091 2737 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:40:54.983277 kubelet[2737]: I1212 17:40:54.983193 2737 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:40:55.113105 kubelet[2737]: I1212 17:40:55.113037 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:55.113244 kubelet[2737]: I1212 17:40:55.113141 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:55.113244 kubelet[2737]: I1212 17:40:55.113165 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:55.113244 kubelet[2737]: I1212 17:40:55.113181 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:55.113244 kubelet[2737]: I1212 17:40:55.113210 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:55.113244 kubelet[2737]: I1212 17:40:55.113225 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13618bc4b2f1ece5807e228b3b9a3a9e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"13618bc4b2f1ece5807e228b3b9a3a9e\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:55.113348 kubelet[2737]: I1212 17:40:55.113246 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:40:55.113348 kubelet[2737]: I1212 17:40:55.113261 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13618bc4b2f1ece5807e228b3b9a3a9e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13618bc4b2f1ece5807e228b3b9a3a9e\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:55.113348 kubelet[2737]: I1212 17:40:55.113275 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13618bc4b2f1ece5807e228b3b9a3a9e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13618bc4b2f1ece5807e228b3b9a3a9e\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:55.247819 kubelet[2737]: E1212 17:40:55.247680 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:55.250306 kubelet[2737]: E1212 17:40:55.250193 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:55.250306 kubelet[2737]: E1212 17:40:55.250245 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:55.794005 kubelet[2737]: I1212 17:40:55.793952 2737 apiserver.go:52] "Watching apiserver" Dec 12 17:40:55.811343 kubelet[2737]: I1212 17:40:55.811279 2737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:40:55.856278 kubelet[2737]: I1212 17:40:55.856086 2737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:55.856278 kubelet[2737]: I1212 17:40:55.856266 2737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:55.856451 kubelet[2737]: E1212 17:40:55.856328 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:55.863435 kubelet[2737]: E1212 17:40:55.863347 2737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:40:55.863435 kubelet[2737]: E1212 17:40:55.863383 2737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:40:55.863623 kubelet[2737]: E1212 17:40:55.863566 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:55.863647 kubelet[2737]: E1212 17:40:55.863631 2737 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:55.883416 kubelet[2737]: I1212 17:40:55.883314 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.883295379 podStartE2EDuration="1.883295379s" podCreationTimestamp="2025-12-12 17:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:40:55.882928542 +0000 UTC m=+1.151891634" watchObservedRunningTime="2025-12-12 17:40:55.883295379 +0000 UTC m=+1.152258471" Dec 12 17:40:55.912831 kubelet[2737]: I1212 17:40:55.912541 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.912523574 podStartE2EDuration="1.912523574s" podCreationTimestamp="2025-12-12 17:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:40:55.897200749 +0000 UTC m=+1.166163841" watchObservedRunningTime="2025-12-12 17:40:55.912523574 +0000 UTC m=+1.181486666" Dec 12 17:40:55.923985 kubelet[2737]: I1212 17:40:55.923910 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9238928 podStartE2EDuration="1.9238928s" podCreationTimestamp="2025-12-12 17:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:40:55.912591328 +0000 UTC m=+1.181554420" watchObservedRunningTime="2025-12-12 17:40:55.9238928 +0000 UTC m=+1.192856052" Dec 12 17:40:56.857565 kubelet[2737]: E1212 17:40:56.857412 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:56.857565 kubelet[2737]: E1212 17:40:56.857496 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:40:59.623615 kubelet[2737]: I1212 17:40:59.623582 2737 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:40:59.624133 containerd[1582]: time="2025-12-12T17:40:59.624099695Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:40:59.625812 kubelet[2737]: I1212 17:40:59.625781 2737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:41:00.788099 systemd[1]: Created slice kubepods-besteffort-pod8ef67315_7fb2_4775_a3dc_be1f035ab3fe.slice - libcontainer container kubepods-besteffort-pod8ef67315_7fb2_4775_a3dc_be1f035ab3fe.slice. 
Dec 12 17:41:00.845262 kubelet[2737]: I1212 17:41:00.845206 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ef67315-7fb2-4775-a3dc-be1f035ab3fe-xtables-lock\") pod \"kube-proxy-x8lqk\" (UID: \"8ef67315-7fb2-4775-a3dc-be1f035ab3fe\") " pod="kube-system/kube-proxy-x8lqk" Dec 12 17:41:00.845262 kubelet[2737]: I1212 17:41:00.845246 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ef67315-7fb2-4775-a3dc-be1f035ab3fe-kube-proxy\") pod \"kube-proxy-x8lqk\" (UID: \"8ef67315-7fb2-4775-a3dc-be1f035ab3fe\") " pod="kube-system/kube-proxy-x8lqk" Dec 12 17:41:00.845262 kubelet[2737]: I1212 17:41:00.845265 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ef67315-7fb2-4775-a3dc-be1f035ab3fe-lib-modules\") pod \"kube-proxy-x8lqk\" (UID: \"8ef67315-7fb2-4775-a3dc-be1f035ab3fe\") " pod="kube-system/kube-proxy-x8lqk" Dec 12 17:41:00.845640 kubelet[2737]: I1212 17:41:00.845282 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4p6\" (UniqueName: \"kubernetes.io/projected/8ef67315-7fb2-4775-a3dc-be1f035ab3fe-kube-api-access-hn4p6\") pod \"kube-proxy-x8lqk\" (UID: \"8ef67315-7fb2-4775-a3dc-be1f035ab3fe\") " pod="kube-system/kube-proxy-x8lqk" Dec 12 17:41:00.916858 systemd[1]: Created slice kubepods-besteffort-pod23ab04fa_c888_4b65_928e_8562d48584e0.slice - libcontainer container kubepods-besteffort-pod23ab04fa_c888_4b65_928e_8562d48584e0.slice. Dec 12 17:41:00.946259 kubelet[2737]: I1212 17:41:00.946216 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg2d\" (UniqueName: \"kubernetes.io/projected/23ab04fa-c888-4b65-928e-8562d48584e0-kube-api-access-5mg2d\") pod \"tigera-operator-65cdcdfd6d-hrncw\" (UID: \"23ab04fa-c888-4b65-928e-8562d48584e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hrncw" Dec 12 17:41:00.946396 kubelet[2737]: I1212 17:41:00.946288 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23ab04fa-c888-4b65-928e-8562d48584e0-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hrncw\" (UID: \"23ab04fa-c888-4b65-928e-8562d48584e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hrncw" Dec 12 17:41:01.101768 kubelet[2737]: E1212 17:41:01.101670 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:01.103371 containerd[1582]: time="2025-12-12T17:41:01.103272900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8lqk,Uid:8ef67315-7fb2-4775-a3dc-be1f035ab3fe,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:01.121601 containerd[1582]: time="2025-12-12T17:41:01.121137362Z" level=info msg="connecting to shim e0766a215c8cf3a5201b302004f0ee885ba994418e0374dbf7fe5d4658ad1234" address="unix:///run/containerd/s/1695c3a8d98944c116ed5757f8cacbc6a00bee78bb1d3d6d8d33ecb52b5d338f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:01.155038 systemd[1]: Started cri-containerd-e0766a215c8cf3a5201b302004f0ee885ba994418e0374dbf7fe5d4658ad1234.scope - libcontainer container 
e0766a215c8cf3a5201b302004f0ee885ba994418e0374dbf7fe5d4658ad1234. Dec 12 17:41:01.163000 audit: BPF prog-id=131 op=LOAD Dec 12 17:41:01.165351 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:41:01.165416 kernel: audit: type=1334 audit(1765561261.163:426): prog-id=131 op=LOAD Dec 12 17:41:01.164000 audit: BPF prog-id=132 op=LOAD Dec 12 17:41:01.164000 audit[2816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.169924 kernel: audit: type=1334 audit(1765561261.164:427): prog-id=132 op=LOAD Dec 12 17:41:01.169976 kernel: audit: type=1300 audit(1765561261.164:427): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.170000 kernel: audit: type=1327 audit(1765561261.164:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.164000 audit: BPF prog-id=132 op=UNLOAD Dec 12 17:41:01.173835 kernel: audit: type=1334 audit(1765561261.164:428): prog-id=132 op=UNLOAD Dec 12 17:41:01.173882 kernel: audit: type=1300 audit(1765561261.164:428): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.164000 audit[2816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.180046 kernel: audit: type=1327 audit(1765561261.164:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.180142 kernel: audit: type=1334 audit(1765561261.164:429): prog-id=133 op=LOAD Dec 12 17:41:01.164000 audit: BPF prog-id=133 op=LOAD Dec 12 17:41:01.164000 audit[2816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.184284 kernel: audit: type=1300 audit(1765561261.164:429): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.188137 kernel: audit: type=1327 audit(1765561261.164:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.165000 audit: BPF prog-id=134 op=LOAD Dec 12 17:41:01.165000 audit[2816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.165000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:41:01.165000 audit[2816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.165000 audit: BPF prog-id=133 op=UNLOAD Dec 12 17:41:01.165000 audit[2816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.165000 audit: BPF prog-id=135 op=LOAD Dec 12 17:41:01.165000 audit[2816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2803 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 17:41:01.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373636613231356338636633613532303162333032303034663065 Dec 12 17:41:01.202716 containerd[1582]: time="2025-12-12T17:41:01.202672309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8lqk,Uid:8ef67315-7fb2-4775-a3dc-be1f035ab3fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0766a215c8cf3a5201b302004f0ee885ba994418e0374dbf7fe5d4658ad1234\"" Dec 12 17:41:01.203605 kubelet[2737]: E1212 17:41:01.203540 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:01.209264 containerd[1582]: time="2025-12-12T17:41:01.209213596Z" level=info msg="CreateContainer within sandbox \"e0766a215c8cf3a5201b302004f0ee885ba994418e0374dbf7fe5d4658ad1234\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:41:01.220318 containerd[1582]: time="2025-12-12T17:41:01.220271862Z" level=info msg="Container a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:01.222010 containerd[1582]: time="2025-12-12T17:41:01.221980778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hrncw,Uid:23ab04fa-c888-4b65-928e-8562d48584e0,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:41:01.246689 containerd[1582]: time="2025-12-12T17:41:01.246596948Z" level=info msg="connecting to shim 55cd50dc00873880147049522a6ed2d36c5b0ae45991478d576390c6645fb946" address="unix:///run/containerd/s/ca8eb79991dc9e7d4c64c4c30cf55bc5a67341bc1c5bc63833d5114f6e33fa5a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:01.254902 containerd[1582]: time="2025-12-12T17:41:01.254275737Z" level=info msg="CreateContainer within sandbox \"e0766a215c8cf3a5201b302004f0ee885ba994418e0374dbf7fe5d4658ad1234\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134\"" Dec 12 17:41:01.255324 containerd[1582]: time="2025-12-12T17:41:01.255293296Z" level=info msg="StartContainer for \"a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134\"" Dec 12 17:41:01.257118 containerd[1582]: time="2025-12-12T17:41:01.257088165Z" level=info msg="connecting to shim a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134" address="unix:///run/containerd/s/1695c3a8d98944c116ed5757f8cacbc6a00bee78bb1d3d6d8d33ecb52b5d338f" protocol=ttrpc version=3 Dec 12 17:41:01.274023 systemd[1]: Started cri-containerd-55cd50dc00873880147049522a6ed2d36c5b0ae45991478d576390c6645fb946.scope - libcontainer container 55cd50dc00873880147049522a6ed2d36c5b0ae45991478d576390c6645fb946. Dec 12 17:41:01.278876 systemd[1]: Started cri-containerd-a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134.scope - libcontainer container a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134. 
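Note: the systemd entries above start each CRI container as a transient scope named cri-containerd-<container-id>.scope, nested under the kubepods-besteffort-pod<uid>.slice created for its pod. A Go sketch, assuming cgroup v2 mounted at /sys/fs/cgroup (consistent with CgroupDriver=systemd and CgroupVersion=2 earlier in the log), that locates the cgroup directory of the kube-proxy container ID seen above:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        // Container ID copied from the "Started cri-containerd-...scope" entry
        // above; it will differ on any other node or boot.
        const containerID = "a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134"
        target := "cri-containerd-" + containerID + ".scope"

        // Walk the unified cgroup hierarchy and print every directory whose
        // name matches the transient scope unit.
        err := filepath.WalkDir("/sys/fs/cgroup", func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return nil // skip entries we cannot read
            }
            if d.IsDir() && d.Name() == target {
                fmt.Println("scope cgroup:", path)
            }
            return nil
        })
        if err != nil {
            fmt.Println("walk failed:", err)
        }
    }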
Dec 12 17:41:01.285000 audit: BPF prog-id=136 op=LOAD Dec 12 17:41:01.285000 audit: BPF prog-id=137 op=LOAD Dec 12 17:41:01.285000 audit[2860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.285000 audit: BPF prog-id=137 op=UNLOAD Dec 12 17:41:01.285000 audit[2860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.285000 audit: BPF prog-id=138 op=LOAD Dec 12 17:41:01.285000 audit[2860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.286000 audit: BPF prog-id=139 op=LOAD Dec 12 17:41:01.286000 audit[2860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.286000 audit: BPF prog-id=139 op=UNLOAD Dec 12 17:41:01.286000 audit[2860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.286000 audit: BPF prog-id=138 op=UNLOAD Dec 12 17:41:01.286000 audit[2860]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.286000 audit: BPF prog-id=140 op=LOAD Dec 12 17:41:01.286000 audit[2860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2849 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636435306463303038373338383031343730343935323261366564 Dec 12 17:41:01.309571 containerd[1582]: time="2025-12-12T17:41:01.309532679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hrncw,Uid:23ab04fa-c888-4b65-928e-8562d48584e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"55cd50dc00873880147049522a6ed2d36c5b0ae45991478d576390c6645fb946\"" Dec 12 17:41:01.312552 containerd[1582]: time="2025-12-12T17:41:01.311179221Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:41:01.330000 audit: BPF prog-id=141 op=LOAD Dec 12 17:41:01.330000 audit[2866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2803 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138373732316366396530393162626536636634623339616534613130 Dec 12 17:41:01.330000 audit: BPF prog-id=142 op=LOAD Dec 12 17:41:01.330000 audit[2866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2803 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138373732316366396530393162626536636634623339616534613130 Dec 12 17:41:01.330000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:41:01.330000 audit[2866]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2803 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.330000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138373732316366396530393162626536636634623339616534613130 Dec 12 17:41:01.330000 audit: BPF prog-id=141 op=UNLOAD Dec 12 17:41:01.330000 audit[2866]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2803 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138373732316366396530393162626536636634623339616534613130 Dec 12 17:41:01.330000 audit: BPF prog-id=143 op=LOAD Dec 12 17:41:01.330000 audit[2866]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2803 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138373732316366396530393162626536636634623339616534613130 Dec 12 17:41:01.346719 containerd[1582]: time="2025-12-12T17:41:01.346669258Z" level=info msg="StartContainer for \"a87721cf9e091bbe6cf4b39ae4a1037150a0a2e2ceb358a550eade5b3e378134\" returns successfully" Dec 12 17:41:01.574000 audit[2952]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.574000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd1e9f60 a2=0 a3=1 items=0 ppid=2891 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:41:01.574000 audit[2953]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.574000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe219bce0 a2=0 a3=1 items=0 ppid=2891 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:41:01.576000 audit[2954]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.576000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecf3f900 a2=0 a3=1 items=0 ppid=2891 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:41:01.578000 audit[2955]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.578000 audit[2955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb976a80 a2=0 a3=1 items=0 ppid=2891 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:41:01.580000 audit[2959]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.580000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd80f4770 a2=0 a3=1 items=0 ppid=2891 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:41:01.582000 audit[2961]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.582000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc07cc30 a2=0 a3=1 items=0 ppid=2891 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:41:01.680000 audit[2962]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.680000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffa0f6bf0 a2=0 a3=1 items=0 ppid=2891 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:41:01.683000 audit[2964]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.683000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc4be82f0 a2=0 a3=1 items=0 ppid=2891 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.683000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 12 17:41:01.686000 audit[2967]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.686000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff6e56060 a2=0 a3=1 items=0 ppid=2891 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 17:41:01.687000 audit[2968]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.687000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9de10b0 a2=0 a3=1 items=0 ppid=2891 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:41:01.690000 audit[2970]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.690000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe94eaf80 a2=0 a3=1 items=0 ppid=2891 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:41:01.691000 audit[2971]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.691000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6bcc7e0 a2=0 a3=1 items=0 ppid=2891 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:41:01.693000 audit[2973]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.693000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc63f1a40 a2=0 a3=1 items=0 ppid=2891 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.697000 audit[2976]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.697000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffce0923b0 a2=0 a3=1 items=0 ppid=2891 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.698000 audit[2977]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.698000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa4addb0 a2=0 a3=1 items=0 ppid=2891 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:41:01.701000 audit[2979]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.701000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd398d8c0 a2=0 a3=1 items=0 ppid=2891 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:41:01.702000 audit[2980]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.702000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb90b720 a2=0 a3=1 items=0 ppid=2891 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:41:01.705000 audit[2982]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.705000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe27deae0 
a2=0 a3=1 items=0 ppid=2891 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 12 17:41:01.709000 audit[2985]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.709000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd7dca830 a2=0 a3=1 items=0 ppid=2891 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 17:41:01.712000 audit[2988]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.712000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb7ef5b0 a2=0 a3=1 items=0 ppid=2891 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 17:41:01.713000 audit[2989]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.713000 audit[2989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc80b2d90 a2=0 a3=1 items=0 ppid=2891 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:41:01.716000 audit[2991]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.716000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff9814030 a2=0 a3=1 items=0 ppid=2891 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 
17:41:01.719000 audit[2994]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.719000 audit[2994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff720b570 a2=0 a3=1 items=0 ppid=2891 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.720000 audit[2995]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.720000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5dc8960 a2=0 a3=1 items=0 ppid=2891 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:41:01.723000 audit[2997]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:41:01.723000 audit[2997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffe4ff340 a2=0 a3=1 items=0 ppid=2891 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:41:01.741000 audit[3003]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:01.741000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcda52590 a2=0 a3=1 items=0 ppid=2891 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:01.749000 audit[3003]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:01.749000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcda52590 a2=0 a3=1 items=0 ppid=2891 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:01.751000 audit[3008]: NETFILTER_CFG table=filter:81 family=10 
entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.751000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcd778990 a2=0 a3=1 items=0 ppid=2891 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:41:01.753000 audit[3010]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.753000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdd6d9750 a2=0 a3=1 items=0 ppid=2891 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 17:41:01.757000 audit[3013]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.757000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc90825b0 a2=0 a3=1 items=0 ppid=2891 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 12 17:41:01.758000 audit[3014]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.758000 audit[3014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe51b0920 a2=0 a3=1 items=0 ppid=2891 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:41:01.760000 audit[3016]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.760000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc4fd3890 a2=0 a3=1 items=0 ppid=2891 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.760000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:41:01.761000 audit[3017]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.761000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd491a080 a2=0 a3=1 items=0 ppid=2891 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:41:01.763000 audit[3019]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.763000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffde8604f0 a2=0 a3=1 items=0 ppid=2891 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.767000 audit[3022]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.767000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd73328c0 a2=0 a3=1 items=0 ppid=2891 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.768000 audit[3023]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.768000 audit[3023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec1ceb30 a2=0 a3=1 items=0 ppid=2891 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:41:01.770000 audit[3025]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.770000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1c758b0 a2=0 a3=1 items=0 ppid=2891 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:41:01.771000 audit[3026]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.771000 audit[3026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff463eb90 a2=0 a3=1 items=0 ppid=2891 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:41:01.774000 audit[3028]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.774000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1a02130 a2=0 a3=1 items=0 ppid=2891 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 17:41:01.777000 audit[3031]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.777000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffbbd27a0 a2=0 a3=1 items=0 ppid=2891 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 17:41:01.781000 audit[3034]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.781000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdc2c61e0 a2=0 a3=1 items=0 ppid=2891 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 12 17:41:01.782000 audit[3035]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.782000 audit[3035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd2e29ab0 a2=0 a3=1 items=0 ppid=2891 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:41:01.784000 audit[3037]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.784000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffeb951eb0 a2=0 a3=1 items=0 ppid=2891 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.787000 audit[3040]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.787000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4b19ac0 a2=0 a3=1 items=0 ppid=2891 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:41:01.788000 audit[3041]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.788000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe26fa780 a2=0 a3=1 items=0 ppid=2891 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:41:01.790000 audit[3043]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.790000 audit[3043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe3d8b900 a2=0 a3=1 items=0 ppid=2891 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:41:01.791000 audit[3044]: NETFILTER_CFG 
table=filter:100 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.791000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4404140 a2=0 a3=1 items=0 ppid=2891 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:41:01.794000 audit[3046]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.794000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd7e38870 a2=0 a3=1 items=0 ppid=2891 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:41:01.797000 audit[3049]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:41:01.797000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc4d1af20 a2=0 a3=1 items=0 ppid=2891 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:41:01.800000 audit[3051]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:41:01.800000 audit[3051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffce74f0b0 a2=0 a3=1 items=0 ppid=2891 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.800000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:01.800000 audit[3051]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:41:01.800000 audit[3051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffce74f0b0 a2=0 a3=1 items=0 ppid=2891 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:01.800000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:01.868137 kubelet[2737]: E1212 17:41:01.868020 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:01.879322 kubelet[2737]: I1212 17:41:01.879237 2737 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x8lqk" podStartSLOduration=1.879221318 podStartE2EDuration="1.879221318s" podCreationTimestamp="2025-12-12 17:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:01.879034517 +0000 UTC m=+7.147997609" watchObservedRunningTime="2025-12-12 17:41:01.879221318 +0000 UTC m=+7.148184410" Dec 12 17:41:02.418287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439328565.mount: Deactivated successfully. Dec 12 17:41:03.092905 kubelet[2737]: E1212 17:41:03.092822 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:03.725557 containerd[1582]: time="2025-12-12T17:41:03.725074616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:03.725910 containerd[1582]: time="2025-12-12T17:41:03.725835323Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22143261" Dec 12 17:41:03.726535 containerd[1582]: time="2025-12-12T17:41:03.726507922Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:03.728494 containerd[1582]: time="2025-12-12T17:41:03.728455666Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:03.730398 containerd[1582]: time="2025-12-12T17:41:03.730365260Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.419155774s" Dec 12 17:41:03.730492 containerd[1582]: time="2025-12-12T17:41:03.730469620Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:41:03.737161 containerd[1582]: time="2025-12-12T17:41:03.737129080Z" level=info msg="CreateContainer within sandbox \"55cd50dc00873880147049522a6ed2d36c5b0ae45991478d576390c6645fb946\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:41:03.745441 containerd[1582]: time="2025-12-12T17:41:03.745386573Z" level=info msg="Container 49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:03.747929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932301309.mount: Deactivated successfully. 
Dec 12 17:41:03.751525 containerd[1582]: time="2025-12-12T17:41:03.751489284Z" level=info msg="CreateContainer within sandbox \"55cd50dc00873880147049522a6ed2d36c5b0ae45991478d576390c6645fb946\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c\"" Dec 12 17:41:03.752120 containerd[1582]: time="2025-12-12T17:41:03.752094911Z" level=info msg="StartContainer for \"49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c\"" Dec 12 17:41:03.753745 containerd[1582]: time="2025-12-12T17:41:03.753680695Z" level=info msg="connecting to shim 49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c" address="unix:///run/containerd/s/ca8eb79991dc9e7d4c64c4c30cf55bc5a67341bc1c5bc63833d5114f6e33fa5a" protocol=ttrpc version=3 Dec 12 17:41:03.798003 systemd[1]: Started cri-containerd-49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c.scope - libcontainer container 49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c. Dec 12 17:41:03.807000 audit: BPF prog-id=144 op=LOAD Dec 12 17:41:03.807000 audit: BPF prog-id=145 op=LOAD Dec 12 17:41:03.807000 audit[3060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.807000 audit: BPF prog-id=145 op=UNLOAD Dec 12 17:41:03.807000 audit[3060]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.807000 audit: BPF prog-id=146 op=LOAD Dec 12 17:41:03.807000 audit[3060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.808000 audit: BPF prog-id=147 op=LOAD Dec 12 17:41:03.808000 audit[3060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.808000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.808000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:41:03.808000 audit[3060]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.808000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:41:03.808000 audit[3060]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.808000 audit: BPF prog-id=148 op=LOAD Dec 12 17:41:03.808000 audit[3060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2849 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:03.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613263323335373739653733363565636263633136663034666539 Dec 12 17:41:03.831643 containerd[1582]: time="2025-12-12T17:41:03.831607321Z" level=info msg="StartContainer for \"49a2c235779e7365ecbcc16f04fe9339ce95bf7c163f92dc2c1be9a6390ebc4c\" returns successfully" Dec 12 17:41:03.878370 kubelet[2737]: E1212 17:41:03.878316 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:03.889835 kubelet[2737]: I1212 17:41:03.889559 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hrncw" podStartSLOduration=1.467591098 podStartE2EDuration="3.889542957s" podCreationTimestamp="2025-12-12 17:41:00 +0000 UTC" firstStartedPulling="2025-12-12 17:41:01.310682752 +0000 UTC m=+6.579645844" lastFinishedPulling="2025-12-12 17:41:03.732634611 +0000 UTC m=+9.001597703" observedRunningTime="2025-12-12 17:41:03.889012948 +0000 UTC m=+9.157976040" watchObservedRunningTime="2025-12-12 17:41:03.889542957 +0000 UTC m=+9.158506049" Dec 12 17:41:03.952201 kubelet[2737]: E1212 17:41:03.952150 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:04.879974 kubelet[2737]: E1212 17:41:04.879930 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:05.543258 kubelet[2737]: E1212 17:41:05.543209 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:05.882628 kubelet[2737]: E1212 17:41:05.882598 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:09.205410 sudo[1794]: pam_unix(sudo:session): session closed for user root Dec 12 17:41:09.203000 audit[1794]: USER_END pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:41:09.206288 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:41:09.206344 kernel: audit: type=1106 audit(1765561269.203:506): pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:41:09.209429 sshd[1793]: Connection closed by 10.0.0.1 port 53546 Dec 12 17:41:09.203000 audit[1794]: CRED_DISP pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:41:09.212650 kernel: audit: type=1104 audit(1765561269.203:507): pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:41:09.212793 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:09.214000 audit[1790]: USER_END pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:09.218000 audit[1790]: CRED_DISP pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:09.222950 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:53546.service: Deactivated successfully. 
Dec 12 17:41:09.225018 kernel: audit: type=1106 audit(1765561269.214:508): pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:09.225088 kernel: audit: type=1104 audit(1765561269.218:509): pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:09.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.131:22-10.0.0.1:53546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:09.226114 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:41:09.228720 kernel: audit: type=1131 audit(1765561269.221:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.131:22-10.0.0.1:53546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:09.227860 systemd[1]: session-7.scope: Consumed 7.053s CPU time, 211.7M memory peak. Dec 12 17:41:09.229773 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:41:09.230636 systemd-logind[1556]: Removed session 7. Dec 12 17:41:10.667000 audit[3156]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:10.667000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff43a1110 a2=0 a3=1 items=0 ppid=2891 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:10.674836 kernel: audit: type=1325 audit(1765561270.667:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:10.674966 kernel: audit: type=1300 audit(1765561270.667:511): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff43a1110 a2=0 a3=1 items=0 ppid=2891 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:10.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:10.682479 kernel: audit: type=1327 audit(1765561270.667:511): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:10.676000 audit[3156]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:10.685375 kernel: audit: type=1325 audit(1765561270.676:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:10.676000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff43a1110 a2=0 a3=1 items=0 ppid=2891 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:10.690630 kernel: audit: type=1300 audit(1765561270.676:512): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff43a1110 a2=0 a3=1 items=0 ppid=2891 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:10.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:11.493132 update_engine[1558]: I20251212 17:41:11.493062 1558 update_attempter.cc:509] Updating boot flags... Dec 12 17:41:11.690000 audit[3172]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:11.690000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe0f83fa0 a2=0 a3=1 items=0 ppid=2891 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:11.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:11.698000 audit[3172]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:11.698000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe0f83fa0 a2=0 a3=1 items=0 ppid=2891 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:11.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:14.647000 audit[3174]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:14.649280 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:41:14.649347 kernel: audit: type=1325 audit(1765561274.647:515): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:14.647000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcf4d8130 a2=0 a3=1 items=0 ppid=2891 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:14.656580 kernel: audit: type=1300 audit(1765561274.647:515): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcf4d8130 a2=0 a3=1 items=0 ppid=2891 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:14.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:14.660422 kernel: audit: type=1327 audit(1765561274.647:515): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:14.660540 kernel: audit: type=1325 audit(1765561274.657:516): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:14.657000 audit[3174]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:14.657000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf4d8130 a2=0 a3=1 items=0 ppid=2891 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:14.668115 kernel: audit: type=1300 audit(1765561274.657:516): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf4d8130 a2=0 a3=1 items=0 ppid=2891 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:14.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:14.671943 kernel: audit: type=1327 audit(1765561274.657:516): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:15.679000 audit[3176]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:15.679000 audit[3176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc3916190 a2=0 a3=1 items=0 ppid=2891 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:15.686146 kernel: audit: type=1325 audit(1765561275.679:517): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:15.686195 kernel: audit: type=1300 audit(1765561275.679:517): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc3916190 a2=0 a3=1 items=0 ppid=2891 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:15.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:15.688398 kernel: audit: type=1327 audit(1765561275.679:517): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:15.690000 audit[3176]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:15.690000 audit[3176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3916190 a2=0 a3=1 items=0 ppid=2891 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:15.690000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:15.693818 kernel: audit: type=1325 audit(1765561275.690:518): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:16.394682 systemd[1]: Created slice kubepods-besteffort-pod78bfc734_26d8_458d_b4a6_209aa582dbee.slice - libcontainer container kubepods-besteffort-pod78bfc734_26d8_458d_b4a6_209aa582dbee.slice. Dec 12 17:41:16.542170 kubelet[2737]: I1212 17:41:16.542020 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/78bfc734-26d8-458d-b4a6-209aa582dbee-typha-certs\") pod \"calico-typha-6b9dfdf599-m9btz\" (UID: \"78bfc734-26d8-458d-b4a6-209aa582dbee\") " pod="calico-system/calico-typha-6b9dfdf599-m9btz" Dec 12 17:41:16.542170 kubelet[2737]: I1212 17:41:16.542069 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78bfc734-26d8-458d-b4a6-209aa582dbee-tigera-ca-bundle\") pod \"calico-typha-6b9dfdf599-m9btz\" (UID: \"78bfc734-26d8-458d-b4a6-209aa582dbee\") " pod="calico-system/calico-typha-6b9dfdf599-m9btz" Dec 12 17:41:16.542170 kubelet[2737]: I1212 17:41:16.542091 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmmb\" (UniqueName: \"kubernetes.io/projected/78bfc734-26d8-458d-b4a6-209aa582dbee-kube-api-access-lxmmb\") pod \"calico-typha-6b9dfdf599-m9btz\" (UID: \"78bfc734-26d8-458d-b4a6-209aa582dbee\") " pod="calico-system/calico-typha-6b9dfdf599-m9btz" Dec 12 17:41:16.582752 systemd[1]: Created slice kubepods-besteffort-pod2918bc8d_1c5a_4825_8aed_3481a2563a06.slice - libcontainer container kubepods-besteffort-pod2918bc8d_1c5a_4825_8aed_3481a2563a06.slice. 
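Audit pairs each of the NETFILTER_CFG/SYSCALL records above with a PROCTITLE record whose value is hex-encoded, because the recorded command line keeps the NUL bytes that separate argv elements. Decoding the value that recurs in these iptables events recovers the actual invocation; a small sketch in plain Python, assuming nothing beyond the hex/NUL encoding itself:

    # Sketch: decode an audit PROCTITLE value (hex string, NUL-separated argv).
    def decode_proctitle(hex_value: str) -> str:
        args = bytes.fromhex(hex_value).split(b"\x00")
        return " ".join(a.decode("utf-8", errors="replace") for a in args if a)

    # Value attached to the NETFILTER_CFG records above:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 --noflush --counters

The same decoder applied to the runc PROCTITLE records earlier yields the runc --root /run/containerd/runc/k8s.io --log ... command line, cut short where the kernel caps the recorded proctitle length.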
Dec 12 17:41:16.703015 kubelet[2737]: E1212 17:41:16.702913 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:16.704244 containerd[1582]: time="2025-12-12T17:41:16.703787605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9dfdf599-m9btz,Uid:78bfc734-26d8-458d-b4a6-209aa582dbee,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:16.715000 audit[3180]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:16.715000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc83b31e0 a2=0 a3=1 items=0 ppid=2891 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:16.720000 audit[3180]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:16.720000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc83b31e0 a2=0 a3=1 items=0 ppid=2891 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:16.743836 kubelet[2737]: I1212 17:41:16.743238 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-var-lib-calico\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.743836 kubelet[2737]: I1212 17:41:16.743283 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2918bc8d-1c5a-4825-8aed-3481a2563a06-node-certs\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.743836 kubelet[2737]: I1212 17:41:16.743302 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6hg\" (UniqueName: \"kubernetes.io/projected/2918bc8d-1c5a-4825-8aed-3481a2563a06-kube-api-access-ph6hg\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.743836 kubelet[2737]: I1212 17:41:16.743320 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-cni-bin-dir\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.743836 kubelet[2737]: I1212 17:41:16.743334 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-cni-net-dir\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744166 kubelet[2737]: I1212 17:41:16.743348 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-xtables-lock\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744166 kubelet[2737]: I1212 17:41:16.743362 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-cni-log-dir\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744166 kubelet[2737]: I1212 17:41:16.743375 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2918bc8d-1c5a-4825-8aed-3481a2563a06-tigera-ca-bundle\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744166 kubelet[2737]: I1212 17:41:16.743392 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-flexvol-driver-host\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744166 kubelet[2737]: I1212 17:41:16.743405 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-lib-modules\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744274 kubelet[2737]: I1212 17:41:16.743422 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-policysync\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.744274 kubelet[2737]: I1212 17:41:16.743506 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2918bc8d-1c5a-4825-8aed-3481a2563a06-var-run-calico\") pod \"calico-node-5kscz\" (UID: \"2918bc8d-1c5a-4825-8aed-3481a2563a06\") " pod="calico-system/calico-node-5kscz" Dec 12 17:41:16.757701 containerd[1582]: time="2025-12-12T17:41:16.757611050Z" level=info msg="connecting to shim bd3390ae1c78e93995ab09a1e3abccfe4b8c1753d90fb36bb1a2dbc07e3ca74d" address="unix:///run/containerd/s/9cde638247474fd2e1767f2060d1e263cd654d9d0ad27111e02e7883e721a140" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:16.776868 kubelet[2737]: E1212 17:41:16.776702 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpm8r" 
podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:16.811275 systemd[1]: Started cri-containerd-bd3390ae1c78e93995ab09a1e3abccfe4b8c1753d90fb36bb1a2dbc07e3ca74d.scope - libcontainer container bd3390ae1c78e93995ab09a1e3abccfe4b8c1753d90fb36bb1a2dbc07e3ca74d. Dec 12 17:41:16.825000 audit: BPF prog-id=149 op=LOAD Dec 12 17:41:16.826000 audit: BPF prog-id=150 op=LOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.826000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.826000 audit: BPF prog-id=151 op=LOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.826000 audit: BPF prog-id=152 op=LOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.826000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.826000 audit: BPF prog-id=151 op=UNLOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.826000 audit: BPF prog-id=153 op=LOAD Dec 12 17:41:16.826000 audit[3201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3190 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264333339306165316337386539333939356162303961316533616263 Dec 12 17:41:16.849420 kubelet[2737]: E1212 17:41:16.849387 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.849516 kubelet[2737]: W1212 17:41:16.849494 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.849541 kubelet[2737]: E1212 17:41:16.849517 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.855839 containerd[1582]: time="2025-12-12T17:41:16.855772958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9dfdf599-m9btz,Uid:78bfc734-26d8-458d-b4a6-209aa582dbee,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd3390ae1c78e93995ab09a1e3abccfe4b8c1753d90fb36bb1a2dbc07e3ca74d\"" Dec 12 17:41:16.856042 kubelet[2737]: E1212 17:41:16.856006 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.856042 kubelet[2737]: W1212 17:41:16.856040 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.856103 kubelet[2737]: E1212 17:41:16.856057 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:16.856504 kubelet[2737]: E1212 17:41:16.856487 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:16.857397 containerd[1582]: time="2025-12-12T17:41:16.857365907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:41:16.888135 kubelet[2737]: E1212 17:41:16.888064 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:16.888522 containerd[1582]: time="2025-12-12T17:41:16.888470185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5kscz,Uid:2918bc8d-1c5a-4825-8aed-3481a2563a06,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:16.906273 containerd[1582]: time="2025-12-12T17:41:16.906093901Z" level=info msg="connecting to shim bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50" address="unix:///run/containerd/s/ce4d8a912458221c97f9d3de59d5e86b9218b5642973bd9967da936e2c7a3dd8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:16.932195 systemd[1]: Started cri-containerd-bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50.scope - libcontainer container bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50. Dec 12 17:41:16.946332 kubelet[2737]: E1212 17:41:16.946201 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.946332 kubelet[2737]: W1212 17:41:16.946226 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.946332 kubelet[2737]: E1212 17:41:16.946245 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.946332 kubelet[2737]: I1212 17:41:16.946275 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb-socket-dir\") pod \"csi-node-driver-dpm8r\" (UID: \"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb\") " pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:16.946738 kubelet[2737]: E1212 17:41:16.946696 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.946738 kubelet[2737]: W1212 17:41:16.946713 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.946738 kubelet[2737]: E1212 17:41:16.946725 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:16.946924 kubelet[2737]: I1212 17:41:16.946876 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb-kubelet-dir\") pod \"csi-node-driver-dpm8r\" (UID: \"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb\") " pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:16.947220 kubelet[2737]: E1212 17:41:16.947169 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.947220 kubelet[2737]: W1212 17:41:16.947184 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.947220 kubelet[2737]: E1212 17:41:16.947206 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.947357 kubelet[2737]: I1212 17:41:16.947342 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb-registration-dir\") pod \"csi-node-driver-dpm8r\" (UID: \"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb\") " pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:16.947711 kubelet[2737]: E1212 17:41:16.947696 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.947777 kubelet[2737]: W1212 17:41:16.947764 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.947887 kubelet[2737]: E1212 17:41:16.947873 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.948112 kubelet[2737]: E1212 17:41:16.948101 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.948174 kubelet[2737]: W1212 17:41:16.948163 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.948246 kubelet[2737]: E1212 17:41:16.948235 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.948861 kubelet[2737]: E1212 17:41:16.948739 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.948861 kubelet[2737]: W1212 17:41:16.948754 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.948861 kubelet[2737]: E1212 17:41:16.948765 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:16.949542 kubelet[2737]: E1212 17:41:16.949409 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.949542 kubelet[2737]: W1212 17:41:16.949525 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.949542 kubelet[2737]: E1212 17:41:16.949538 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.949755 kubelet[2737]: E1212 17:41:16.949740 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.949755 kubelet[2737]: W1212 17:41:16.949753 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.949824 kubelet[2737]: E1212 17:41:16.949763 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.949864 kubelet[2737]: I1212 17:41:16.949841 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb-varrun\") pod \"csi-node-driver-dpm8r\" (UID: \"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb\") " pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:16.950026 kubelet[2737]: E1212 17:41:16.950012 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.950026 kubelet[2737]: W1212 17:41:16.950025 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.950086 kubelet[2737]: E1212 17:41:16.950038 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.950273 kubelet[2737]: E1212 17:41:16.950257 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.950273 kubelet[2737]: W1212 17:41:16.950270 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.950333 kubelet[2737]: E1212 17:41:16.950279 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:16.949000 audit: BPF prog-id=154 op=LOAD Dec 12 17:41:16.950473 kubelet[2737]: E1212 17:41:16.950461 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.950496 kubelet[2737]: W1212 17:41:16.950473 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.950496 kubelet[2737]: E1212 17:41:16.950480 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.950986 kubelet[2737]: E1212 17:41:16.950964 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.951030 kubelet[2737]: W1212 17:41:16.950985 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.951030 kubelet[2737]: E1212 17:41:16.950998 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.950000 audit: BPF prog-id=155 op=LOAD Dec 12 17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.950000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.950000 audit: BPF prog-id=156 op=LOAD Dec 12 17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.950000 audit: BPF prog-id=157 op=LOAD Dec 12 
17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.950000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.950000 audit: BPF prog-id=156 op=UNLOAD Dec 12 17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.950000 audit: BPF prog-id=158 op=LOAD Dec 12 17:41:16.950000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613663303366383765396666376538616238343432373934303365 Dec 12 17:41:16.951783 kubelet[2737]: E1212 17:41:16.951262 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.951783 kubelet[2737]: W1212 17:41:16.951272 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.951783 kubelet[2737]: E1212 17:41:16.951282 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:16.951783 kubelet[2737]: I1212 17:41:16.951304 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4lc\" (UniqueName: \"kubernetes.io/projected/e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb-kube-api-access-5m4lc\") pod \"csi-node-driver-dpm8r\" (UID: \"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb\") " pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:16.951783 kubelet[2737]: E1212 17:41:16.951505 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.951783 kubelet[2737]: W1212 17:41:16.951515 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.951783 kubelet[2737]: E1212 17:41:16.951524 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.951783 kubelet[2737]: E1212 17:41:16.951712 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:16.951783 kubelet[2737]: W1212 17:41:16.951720 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:16.951977 kubelet[2737]: E1212 17:41:16.951728 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:16.972042 containerd[1582]: time="2025-12-12T17:41:16.971933530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5kscz,Uid:2918bc8d-1c5a-4825-8aed-3481a2563a06,Namespace:calico-system,Attempt:0,} returns sandbox id \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\"" Dec 12 17:41:16.973540 kubelet[2737]: E1212 17:41:16.973519 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:17.052786 kubelet[2737]: E1212 17:41:17.052744 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.052786 kubelet[2737]: W1212 17:41:17.052769 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.052786 kubelet[2737]: E1212 17:41:17.052788 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:17.053030 kubelet[2737]: E1212 17:41:17.052997 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.053090 kubelet[2737]: W1212 17:41:17.053031 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.053090 kubelet[2737]: E1212 17:41:17.053044 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.053295 kubelet[2737]: E1212 17:41:17.053262 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.053295 kubelet[2737]: W1212 17:41:17.053277 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.053295 kubelet[2737]: E1212 17:41:17.053287 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.053475 kubelet[2737]: E1212 17:41:17.053462 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.053475 kubelet[2737]: W1212 17:41:17.053472 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.053532 kubelet[2737]: E1212 17:41:17.053481 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.053739 kubelet[2737]: E1212 17:41:17.053726 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.053739 kubelet[2737]: W1212 17:41:17.053738 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.053822 kubelet[2737]: E1212 17:41:17.053747 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.054016 kubelet[2737]: E1212 17:41:17.053983 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.054016 kubelet[2737]: W1212 17:41:17.053998 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.054016 kubelet[2737]: E1212 17:41:17.054008 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:17.054237 kubelet[2737]: E1212 17:41:17.054195 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.054293 kubelet[2737]: W1212 17:41:17.054237 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.054293 kubelet[2737]: E1212 17:41:17.054255 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.054491 kubelet[2737]: E1212 17:41:17.054476 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.054491 kubelet[2737]: W1212 17:41:17.054489 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.054551 kubelet[2737]: E1212 17:41:17.054499 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.054673 kubelet[2737]: E1212 17:41:17.054660 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.054673 kubelet[2737]: W1212 17:41:17.054670 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.054744 kubelet[2737]: E1212 17:41:17.054678 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.054853 kubelet[2737]: E1212 17:41:17.054839 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.054853 kubelet[2737]: W1212 17:41:17.054851 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.054935 kubelet[2737]: E1212 17:41:17.054858 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.055028 kubelet[2737]: E1212 17:41:17.055014 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.055028 kubelet[2737]: W1212 17:41:17.055024 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.055079 kubelet[2737]: E1212 17:41:17.055032 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:17.055237 kubelet[2737]: E1212 17:41:17.055216 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.055237 kubelet[2737]: W1212 17:41:17.055228 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.055305 kubelet[2737]: E1212 17:41:17.055246 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.055486 kubelet[2737]: E1212 17:41:17.055475 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.055486 kubelet[2737]: W1212 17:41:17.055485 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.055549 kubelet[2737]: E1212 17:41:17.055493 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.055730 kubelet[2737]: E1212 17:41:17.055716 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.055730 kubelet[2737]: W1212 17:41:17.055729 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.055792 kubelet[2737]: E1212 17:41:17.055738 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.055920 kubelet[2737]: E1212 17:41:17.055909 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.055920 kubelet[2737]: W1212 17:41:17.055920 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.055964 kubelet[2737]: E1212 17:41:17.055928 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.056077 kubelet[2737]: E1212 17:41:17.056068 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.056077 kubelet[2737]: W1212 17:41:17.056077 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.056122 kubelet[2737]: E1212 17:41:17.056085 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:17.056224 kubelet[2737]: E1212 17:41:17.056215 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.056224 kubelet[2737]: W1212 17:41:17.056224 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.056280 kubelet[2737]: E1212 17:41:17.056239 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.056426 kubelet[2737]: E1212 17:41:17.056414 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.056453 kubelet[2737]: W1212 17:41:17.056426 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.056453 kubelet[2737]: E1212 17:41:17.056439 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.056615 kubelet[2737]: E1212 17:41:17.056604 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.056615 kubelet[2737]: W1212 17:41:17.056615 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.056667 kubelet[2737]: E1212 17:41:17.056654 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.057360 kubelet[2737]: E1212 17:41:17.057341 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.057360 kubelet[2737]: W1212 17:41:17.057359 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.057413 kubelet[2737]: E1212 17:41:17.057372 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.059015 kubelet[2737]: E1212 17:41:17.058995 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.059015 kubelet[2737]: W1212 17:41:17.059012 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.059138 kubelet[2737]: E1212 17:41:17.059024 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:17.059245 kubelet[2737]: E1212 17:41:17.059224 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.059306 kubelet[2737]: W1212 17:41:17.059292 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.059328 kubelet[2737]: E1212 17:41:17.059310 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.059557 kubelet[2737]: E1212 17:41:17.059513 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.059557 kubelet[2737]: W1212 17:41:17.059555 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.059618 kubelet[2737]: E1212 17:41:17.059566 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.059850 kubelet[2737]: E1212 17:41:17.059798 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.059880 kubelet[2737]: W1212 17:41:17.059850 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.059880 kubelet[2737]: E1212 17:41:17.059860 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.060059 kubelet[2737]: E1212 17:41:17.060048 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.060059 kubelet[2737]: W1212 17:41:17.060059 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.060107 kubelet[2737]: E1212 17:41:17.060067 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:17.073995 kubelet[2737]: E1212 17:41:17.073887 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:17.074264 kubelet[2737]: W1212 17:41:17.074177 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:17.074264 kubelet[2737]: E1212 17:41:17.074200 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:17.896874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384676399.mount: Deactivated successfully. Dec 12 17:41:18.579425 containerd[1582]: time="2025-12-12T17:41:18.579288009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:18.581676 containerd[1582]: time="2025-12-12T17:41:18.581544781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:41:18.582744 containerd[1582]: time="2025-12-12T17:41:18.582676468Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:18.586126 containerd[1582]: time="2025-12-12T17:41:18.586077613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:18.586850 containerd[1582]: time="2025-12-12T17:41:18.586785868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.729382226s" Dec 12 17:41:18.587008 containerd[1582]: time="2025-12-12T17:41:18.586852292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:41:18.590617 containerd[1582]: time="2025-12-12T17:41:18.590513169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:41:18.615322 containerd[1582]: time="2025-12-12T17:41:18.615128590Z" level=info msg="CreateContainer within sandbox \"bd3390ae1c78e93995ab09a1e3abccfe4b8c1753d90fb36bb1a2dbc07e3ca74d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:41:18.634646 containerd[1582]: time="2025-12-12T17:41:18.634493601Z" level=info msg="Container d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:18.675850 containerd[1582]: time="2025-12-12T17:41:18.675720801Z" level=info msg="CreateContainer within sandbox \"bd3390ae1c78e93995ab09a1e3abccfe4b8c1753d90fb36bb1a2dbc07e3ca74d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08\"" Dec 12 17:41:18.676540 containerd[1582]: time="2025-12-12T17:41:18.676313095Z" level=info msg="StartContainer for \"d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08\"" Dec 12 17:41:18.678649 containerd[1582]: time="2025-12-12T17:41:18.678233746Z" level=info msg="connecting to shim d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08" address="unix:///run/containerd/s/9cde638247474fd2e1767f2060d1e263cd654d9d0ad27111e02e7883e721a140" protocol=ttrpc version=3 Dec 12 17:41:18.704057 systemd[1]: Started cri-containerd-d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08.scope - libcontainer container d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08. 
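The block of kubelet records above repeats a single condition: the FlexVolume probe finds a driver directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the expected uds executable is missing ("executable file not found in $PATH"), so every init call returns empty output and unmarshalling "" fails with "unexpected end of JSON input". For orientation only, below is a minimal Go sketch of the reply an init call is conventionally expected to print under the documented FlexVolume driver-call protocol; it is not the missing uds driver, and the exact field set is an assumption based on that convention.

// flexvol_init_sketch.go: minimal sketch of a FlexVolume "init" reply,
// assuming the documented driver-call protocol. The real nodeagent~uds
// binary is simply absent on this node, which is what the errors above show:
// empty stdout is what makes json unmarshalling fail.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the fields a FlexVolume reply conventionally carries.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	reply := driverStatus{Status: "Not supported"}
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A driver must print at least this object for init to succeed;
		// printing nothing reproduces "unexpected end of JSON input".
		reply = driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		}
	}
	out, err := json.Marshal(reply)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}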
Dec 12 17:41:18.716000 audit: BPF prog-id=159 op=LOAD Dec 12 17:41:18.717000 audit: BPF prog-id=160 op=LOAD Dec 12 17:41:18.717000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.717000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:41:18.717000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.717000 audit: BPF prog-id=161 op=LOAD Dec 12 17:41:18.717000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.717000 audit: BPF prog-id=162 op=LOAD Dec 12 17:41:18.717000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.717000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:41:18.717000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.717000 audit: BPF prog-id=161 op=UNLOAD Dec 12 17:41:18.717000 audit[3326]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.717000 audit: BPF prog-id=163 op=LOAD Dec 12 17:41:18.717000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3190 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:18.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333734656462376434383862613265303537323731333730393738 Dec 12 17:41:18.747615 containerd[1582]: time="2025-12-12T17:41:18.747573786Z" level=info msg="StartContainer for \"d7374edb7d488ba2e0572713709784509e639992a94501c33e103d30c168ff08\" returns successfully" Dec 12 17:41:18.840432 kubelet[2737]: E1212 17:41:18.840314 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:18.943848 kubelet[2737]: E1212 17:41:18.943004 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:18.958859 kubelet[2737]: E1212 17:41:18.958818 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.958859 kubelet[2737]: W1212 17:41:18.958842 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.958859 kubelet[2737]: E1212 17:41:18.958863 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.960078 kubelet[2737]: E1212 17:41:18.960049 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.960154 kubelet[2737]: W1212 17:41:18.960071 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.960154 kubelet[2737]: E1212 17:41:18.960118 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.960389 kubelet[2737]: E1212 17:41:18.960315 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.960389 kubelet[2737]: W1212 17:41:18.960330 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.960389 kubelet[2737]: E1212 17:41:18.960340 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.961006 kubelet[2737]: E1212 17:41:18.960517 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.961006 kubelet[2737]: W1212 17:41:18.960526 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.961006 kubelet[2737]: E1212 17:41:18.960536 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.961413 kubelet[2737]: E1212 17:41:18.961377 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.961413 kubelet[2737]: W1212 17:41:18.961395 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.961413 kubelet[2737]: E1212 17:41:18.961408 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.962135 kubelet[2737]: E1212 17:41:18.961762 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.962135 kubelet[2737]: W1212 17:41:18.961777 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.962135 kubelet[2737]: E1212 17:41:18.961837 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.962547 kubelet[2737]: E1212 17:41:18.962479 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.962547 kubelet[2737]: W1212 17:41:18.962496 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.962547 kubelet[2737]: E1212 17:41:18.962508 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.962896 kubelet[2737]: E1212 17:41:18.962834 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.962896 kubelet[2737]: W1212 17:41:18.962850 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.964108 kubelet[2737]: E1212 17:41:18.962862 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.964382 kubelet[2737]: E1212 17:41:18.964362 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.964382 kubelet[2737]: W1212 17:41:18.964378 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.964474 kubelet[2737]: E1212 17:41:18.964391 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.964597 kubelet[2737]: E1212 17:41:18.964576 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.964597 kubelet[2737]: W1212 17:41:18.964589 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.964657 kubelet[2737]: E1212 17:41:18.964639 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.964891 kubelet[2737]: E1212 17:41:18.964841 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.964891 kubelet[2737]: W1212 17:41:18.964884 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.964990 kubelet[2737]: E1212 17:41:18.964900 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.965386 kubelet[2737]: E1212 17:41:18.965343 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.965386 kubelet[2737]: W1212 17:41:18.965372 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.965480 kubelet[2737]: E1212 17:41:18.965391 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.965844 kubelet[2737]: E1212 17:41:18.965823 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.965844 kubelet[2737]: W1212 17:41:18.965840 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.965970 kubelet[2737]: E1212 17:41:18.965856 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.966534 kubelet[2737]: E1212 17:41:18.966507 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.966534 kubelet[2737]: W1212 17:41:18.966529 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.966625 kubelet[2737]: E1212 17:41:18.966543 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.966903 kubelet[2737]: E1212 17:41:18.966881 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.966903 kubelet[2737]: W1212 17:41:18.966897 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.966903 kubelet[2737]: E1212 17:41:18.966908 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.969369 kubelet[2737]: E1212 17:41:18.969091 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.969369 kubelet[2737]: W1212 17:41:18.969112 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.969369 kubelet[2737]: E1212 17:41:18.969126 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.969735 kubelet[2737]: E1212 17:41:18.969588 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.969841 kubelet[2737]: W1212 17:41:18.969827 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.969910 kubelet[2737]: E1212 17:41:18.969888 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.970689 kubelet[2737]: E1212 17:41:18.970511 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.970689 kubelet[2737]: W1212 17:41:18.970530 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.970689 kubelet[2737]: E1212 17:41:18.970545 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.970978 kubelet[2737]: E1212 17:41:18.970962 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.971049 kubelet[2737]: W1212 17:41:18.971036 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.971186 kubelet[2737]: E1212 17:41:18.971096 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.971686 kubelet[2737]: E1212 17:41:18.971669 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.971752 kubelet[2737]: W1212 17:41:18.971739 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.971826 kubelet[2737]: E1212 17:41:18.971790 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.972087 kubelet[2737]: E1212 17:41:18.972074 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.972207 kubelet[2737]: W1212 17:41:18.972144 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.972207 kubelet[2737]: E1212 17:41:18.972162 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.972559 kubelet[2737]: E1212 17:41:18.972440 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.972559 kubelet[2737]: W1212 17:41:18.972453 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.972559 kubelet[2737]: E1212 17:41:18.972463 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.972711 kubelet[2737]: E1212 17:41:18.972698 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.972761 kubelet[2737]: W1212 17:41:18.972751 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.972841 kubelet[2737]: E1212 17:41:18.972828 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.973232 kubelet[2737]: E1212 17:41:18.973170 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.973232 kubelet[2737]: W1212 17:41:18.973184 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.973232 kubelet[2737]: E1212 17:41:18.973194 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.973719 kubelet[2737]: E1212 17:41:18.973702 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.973838 kubelet[2737]: W1212 17:41:18.973823 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.974233 kubelet[2737]: E1212 17:41:18.974079 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.974588 kubelet[2737]: E1212 17:41:18.974478 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.974588 kubelet[2737]: W1212 17:41:18.974505 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.974588 kubelet[2737]: E1212 17:41:18.974518 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.975961 kubelet[2737]: E1212 17:41:18.975915 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.976604 kubelet[2737]: W1212 17:41:18.976526 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.976604 kubelet[2737]: E1212 17:41:18.976556 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.982485 kubelet[2737]: E1212 17:41:18.982457 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.982850 kubelet[2737]: W1212 17:41:18.982829 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.983530 kubelet[2737]: E1212 17:41:18.982969 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.984299 kubelet[2737]: E1212 17:41:18.984281 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.984981 kubelet[2737]: W1212 17:41:18.984787 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.984981 kubelet[2737]: E1212 17:41:18.984841 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.986081 kubelet[2737]: E1212 17:41:18.985977 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.986233 kubelet[2737]: W1212 17:41:18.986153 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.986338 kubelet[2737]: E1212 17:41:18.986317 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.987398 kubelet[2737]: E1212 17:41:18.987357 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.987398 kubelet[2737]: W1212 17:41:18.987373 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.987398 kubelet[2737]: E1212 17:41:18.987384 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:18.988552 kubelet[2737]: E1212 17:41:18.988493 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.988552 kubelet[2737]: W1212 17:41:18.988508 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.988552 kubelet[2737]: E1212 17:41:18.988520 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:18.998735 kubelet[2737]: E1212 17:41:18.998636 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:18.998735 kubelet[2737]: W1212 17:41:18.998678 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:18.998735 kubelet[2737]: E1212 17:41:18.998696 2737 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:19.457360 containerd[1582]: time="2025-12-12T17:41:19.457315510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:19.458112 containerd[1582]: time="2025-12-12T17:41:19.458043001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4263307" Dec 12 17:41:19.459052 containerd[1582]: time="2025-12-12T17:41:19.459027260Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:19.461184 containerd[1582]: time="2025-12-12T17:41:19.461153151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:19.462015 containerd[1582]: time="2025-12-12T17:41:19.461981597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 871.427813ms" Dec 12 17:41:19.462062 containerd[1582]: time="2025-12-12T17:41:19.462021330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:41:19.466095 containerd[1582]: time="2025-12-12T17:41:19.466058240Z" level=info msg="CreateContainer within sandbox \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:41:19.476840 containerd[1582]: time="2025-12-12T17:41:19.476182284Z" level=info msg="Container 497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:19.483638 containerd[1582]: time="2025-12-12T17:41:19.483579751Z" level=info msg="CreateContainer within sandbox \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975\"" Dec 12 17:41:19.484328 containerd[1582]: time="2025-12-12T17:41:19.484278511Z" level=info msg="StartContainer for \"497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975\"" Dec 12 17:41:19.488616 containerd[1582]: time="2025-12-12T17:41:19.488550021Z" level=info msg="connecting to 
shim 497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975" address="unix:///run/containerd/s/ce4d8a912458221c97f9d3de59d5e86b9218b5642973bd9967da936e2c7a3dd8" protocol=ttrpc version=3 Dec 12 17:41:19.516101 systemd[1]: Started cri-containerd-497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975.scope - libcontainer container 497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975. Dec 12 17:41:19.565000 audit: BPF prog-id=164 op=LOAD Dec 12 17:41:19.565000 audit[3405]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3237 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373532306363626533376539336537396663626631383635623963 Dec 12 17:41:19.565000 audit: BPF prog-id=165 op=LOAD Dec 12 17:41:19.565000 audit[3405]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3237 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373532306363626533376539336537396663626631383635623963 Dec 12 17:41:19.565000 audit: BPF prog-id=165 op=UNLOAD Dec 12 17:41:19.565000 audit[3405]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373532306363626533376539336537396663626631383635623963 Dec 12 17:41:19.565000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:41:19.565000 audit[3405]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373532306363626533376539336537396663626631383635623963 Dec 12 17:41:19.565000 audit: BPF prog-id=166 op=LOAD Dec 12 17:41:19.565000 audit[3405]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3237 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.565000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373532306363626533376539336537396663626631383635623963 Dec 12 17:41:19.600277 systemd[1]: cri-containerd-497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975.scope: Deactivated successfully. Dec 12 17:41:19.604000 audit: BPF prog-id=166 op=UNLOAD Dec 12 17:41:19.609109 containerd[1582]: time="2025-12-12T17:41:19.609011964Z" level=info msg="StartContainer for \"497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975\" returns successfully" Dec 12 17:41:19.616059 containerd[1582]: time="2025-12-12T17:41:19.615864483Z" level=info msg="received container exit event container_id:\"497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975\" id:\"497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975\" pid:3421 exited_at:{seconds:1765561279 nanos:614346480}" Dec 12 17:41:19.657525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-497520ccbe37e93e79fcbf1865b9c7e857e8c1e5d431e0725945224cbc620975-rootfs.mount: Deactivated successfully. Dec 12 17:41:19.760328 kubelet[2737]: I1212 17:41:19.759788 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b9dfdf599-m9btz" podStartSLOduration=2.027343919 podStartE2EDuration="3.759766893s" podCreationTimestamp="2025-12-12 17:41:16 +0000 UTC" firstStartedPulling="2025-12-12 17:41:16.857001363 +0000 UTC m=+22.125964455" lastFinishedPulling="2025-12-12 17:41:18.589424337 +0000 UTC m=+23.858387429" observedRunningTime="2025-12-12 17:41:18.976950994 +0000 UTC m=+24.245914087" watchObservedRunningTime="2025-12-12 17:41:19.759766893 +0000 UTC m=+25.028729945" Dec 12 17:41:19.783000 audit[3456]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:19.785138 kernel: kauditd_printk_skb: 90 callbacks suppressed Dec 12 17:41:19.785228 kernel: audit: type=1325 audit(1765561279.783:551): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:19.783000 audit[3456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff74616e0 a2=0 a3=1 items=0 ppid=2891 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.793262 kernel: audit: type=1300 audit(1765561279.783:551): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff74616e0 a2=0 a3=1 items=0 ppid=2891 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:19.795347 kernel: audit: type=1327 audit(1765561279.783:551): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:19.798000 audit[3456]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=3456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:19.798000 audit[3456]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=6276 a0=3 a1=fffff74616e0 a2=0 a3=1 items=0 ppid=2891 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.805279 kernel: audit: type=1325 audit(1765561279.798:552): table=nat:116 family=2 entries=19 op=nft_register_chain pid=3456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:19.805354 kernel: audit: type=1300 audit(1765561279.798:552): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff74616e0 a2=0 a3=1 items=0 ppid=2891 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:19.805386 kernel: audit: type=1327 audit(1765561279.798:552): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:19.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:19.947492 kubelet[2737]: E1212 17:41:19.947250 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:19.947492 kubelet[2737]: E1212 17:41:19.947321 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:19.948400 containerd[1582]: time="2025-12-12T17:41:19.948365768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:41:20.841834 kubelet[2737]: E1212 17:41:20.841581 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:20.949493 kubelet[2737]: E1212 17:41:20.949460 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:22.840501 kubelet[2737]: E1212 17:41:22.840444 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:23.488004 containerd[1582]: time="2025-12-12T17:41:23.487948101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:23.489317 containerd[1582]: time="2025-12-12T17:41:23.489265403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:41:23.490678 containerd[1582]: time="2025-12-12T17:41:23.490391970Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:23.492691 containerd[1582]: time="2025-12-12T17:41:23.492659189Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:23.493794 containerd[1582]: time="2025-12-12T17:41:23.493761589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.545254852s" Dec 12 17:41:23.493934 containerd[1582]: time="2025-12-12T17:41:23.493913793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:41:23.499343 containerd[1582]: time="2025-12-12T17:41:23.498881435Z" level=info msg="CreateContainer within sandbox \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:41:23.506106 containerd[1582]: time="2025-12-12T17:41:23.506058119Z" level=info msg="Container 4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:23.514989 containerd[1582]: time="2025-12-12T17:41:23.514947020Z" level=info msg="CreateContainer within sandbox \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89\"" Dec 12 17:41:23.517037 containerd[1582]: time="2025-12-12T17:41:23.516997136Z" level=info msg="StartContainer for \"4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89\"" Dec 12 17:41:23.518937 containerd[1582]: time="2025-12-12T17:41:23.518902409Z" level=info msg="connecting to shim 4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89" address="unix:///run/containerd/s/ce4d8a912458221c97f9d3de59d5e86b9218b5642973bd9967da936e2c7a3dd8" protocol=ttrpc version=3 Dec 12 17:41:23.556004 systemd[1]: Started cri-containerd-4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89.scope - libcontainer container 4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89. 
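The audit records interleaved with these container starts (BPF prog-id LOAD/UNLOAD, SYSCALL, PROCTITLE) carry the audited command line as a hex string with NUL-separated arguments. The iptables-restore PROCTITLE above decodes to "iptables-restore -w 5 --noflush --counters", and the runc PROCTITLEs decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id prefix>", cut short because the recorded proctitle value is capped (the runc hex here is exactly 128 bytes of data). The Go sketch below shows that decoding; the helper name is illustrative, and the sample constant is copied from the iptables-restore record in this log.

// proctitle_decode_sketch.go: decodes an audit PROCTITLE hex value, whose raw
// bytes separate arguments with NUL. Illustrative helper, not part of the node.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns a PROCTITLE hex blob into the space-joined argv.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// Value taken from the iptables-restore audit record above.
	const sample = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(cmd) // prints: iptables-restore -w 5 --noflush --counters
}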
Dec 12 17:41:23.627000 audit: BPF prog-id=167 op=LOAD Dec 12 17:41:23.627000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3237 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:23.632473 kernel: audit: type=1334 audit(1765561283.627:553): prog-id=167 op=LOAD Dec 12 17:41:23.632571 kernel: audit: type=1300 audit(1765561283.627:553): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3237 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:23.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464623339626663313535646234623730663462636561616232376533 Dec 12 17:41:23.635828 kernel: audit: type=1327 audit(1765561283.627:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464623339626663313535646234623730663462636561616232376533 Dec 12 17:41:23.627000 audit: BPF prog-id=168 op=LOAD Dec 12 17:41:23.636816 kernel: audit: type=1334 audit(1765561283.627:554): prog-id=168 op=LOAD Dec 12 17:41:23.627000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3237 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:23.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464623339626663313535646234623730663462636561616232376533 Dec 12 17:41:23.627000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:41:23.627000 audit[3470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:23.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464623339626663313535646234623730663462636561616232376533 Dec 12 17:41:23.627000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:41:23.627000 audit[3470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:23.627000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464623339626663313535646234623730663462636561616232376533 Dec 12 17:41:23.627000 audit: BPF prog-id=169 op=LOAD Dec 12 17:41:23.627000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3237 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:23.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464623339626663313535646234623730663462636561616232376533 Dec 12 17:41:23.655433 containerd[1582]: time="2025-12-12T17:41:23.655371515Z" level=info msg="StartContainer for \"4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89\" returns successfully" Dec 12 17:41:23.960485 kubelet[2737]: E1212 17:41:23.959471 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:24.177906 systemd[1]: cri-containerd-4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89.scope: Deactivated successfully. Dec 12 17:41:24.178481 systemd[1]: cri-containerd-4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89.scope: Consumed 465ms CPU time, 174.9M memory peak, 2.4M read from disk, 165.9M written to disk. Dec 12 17:41:24.181113 containerd[1582]: time="2025-12-12T17:41:24.181077698Z" level=info msg="received container exit event container_id:\"4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89\" id:\"4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89\" pid:3482 exited_at:{seconds:1765561284 nanos:180907170}" Dec 12 17:41:24.180000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:41:24.199267 kubelet[2737]: I1212 17:41:24.199236 2737 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 17:41:24.205701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4db39bfc155db4b70f4bceaab27e31c3bb2a178df1144ad468eae8f783c2fa89-rootfs.mount: Deactivated successfully. Dec 12 17:41:24.271649 systemd[1]: Created slice kubepods-besteffort-pod8b64634c_702a_4400_b949_31628cf96118.slice - libcontainer container kubepods-besteffort-pod8b64634c_702a_4400_b949_31628cf96118.slice. Dec 12 17:41:24.283235 systemd[1]: Created slice kubepods-besteffort-pod59335d98_4d93_4df2_bc4b_c4c82b6bcd24.slice - libcontainer container kubepods-besteffort-pod59335d98_4d93_4df2_bc4b_c4c82b6bcd24.slice. Dec 12 17:41:24.292307 systemd[1]: Created slice kubepods-burstable-pod117d04d2_944b_4a0a_a3e5_7cfa981b4f19.slice - libcontainer container kubepods-burstable-pod117d04d2_944b_4a0a_a3e5_7cfa981b4f19.slice. Dec 12 17:41:24.301656 systemd[1]: Created slice kubepods-burstable-pod9ff4b695_b070_4674_a703_cd00568559f5.slice - libcontainer container kubepods-burstable-pod9ff4b695_b070_4674_a703_cd00568559f5.slice. Dec 12 17:41:24.307670 systemd[1]: Created slice kubepods-besteffort-pod6276624d_bee7_4066_84ff_d2e0529aa160.slice - libcontainer container kubepods-besteffort-pod6276624d_bee7_4066_84ff_d2e0529aa160.slice. 
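The slice names in these systemd records follow the kubelet's per-pod cgroup naming for the QoS classes present here: kubepods-besteffort-pod<UID>.slice and kubepods-burstable-pod<UID>.slice, with the dashes in the pod UID rewritten to underscores because systemd treats '-' in slice unit names as a hierarchy separator. Below is a short Go sketch of that mapping, checked against the UIDs in the surrounding records; the function name is illustrative and this is a naming sketch, not kubelet code.

// pod_slice_name_sketch.go: reproduces the slice-name pattern visible in the
// "Created slice kubepods-..." records above. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the per-pod slice name for the besteffort/burstable
// QoS classes seen in this log; UID dashes become underscores so they are
// not read by systemd as additional slice hierarchy levels.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UIDs taken from the records above.
	fmt.Println(podSliceName("besteffort", "8b64634c-702a-4400-b949-31628cf96118"))
	// -> kubepods-besteffort-pod8b64634c_702a_4400_b949_31628cf96118.slice
	fmt.Println(podSliceName("burstable", "117d04d2-944b-4a0a-a3e5-7cfa981b4f19"))
	// -> kubepods-burstable-pod117d04d2_944b_4a0a_a3e5_7cfa981b4f19.slice
}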
Dec 12 17:41:24.313018 systemd[1]: Created slice kubepods-besteffort-poda763186a_19f9_4afb_a531_5c752f0913e7.slice - libcontainer container kubepods-besteffort-poda763186a_19f9_4afb_a531_5c752f0913e7.slice. Dec 12 17:41:24.318649 kubelet[2737]: I1212 17:41:24.317157 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42e96bc1-b1e8-4725-afa1-530d18ed87af-config\") pod \"goldmane-7c778bb748-glc24\" (UID: \"42e96bc1-b1e8-4725-afa1-530d18ed87af\") " pod="calico-system/goldmane-7c778bb748-glc24" Dec 12 17:41:24.318649 kubelet[2737]: I1212 17:41:24.317191 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7qd\" (UniqueName: \"kubernetes.io/projected/42e96bc1-b1e8-4725-afa1-530d18ed87af-kube-api-access-lh7qd\") pod \"goldmane-7c778bb748-glc24\" (UID: \"42e96bc1-b1e8-4725-afa1-530d18ed87af\") " pod="calico-system/goldmane-7c778bb748-glc24" Dec 12 17:41:24.318649 kubelet[2737]: I1212 17:41:24.317212 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/117d04d2-944b-4a0a-a3e5-7cfa981b4f19-config-volume\") pod \"coredns-66bc5c9577-79smq\" (UID: \"117d04d2-944b-4a0a-a3e5-7cfa981b4f19\") " pod="kube-system/coredns-66bc5c9577-79smq" Dec 12 17:41:24.318649 kubelet[2737]: I1212 17:41:24.317230 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6276624d-bee7-4066-84ff-d2e0529aa160-calico-apiserver-certs\") pod \"calico-apiserver-8ff6bd4cf-74cxw\" (UID: \"6276624d-bee7-4066-84ff-d2e0529aa160\") " pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" Dec 12 17:41:24.318649 kubelet[2737]: I1212 17:41:24.317252 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl969\" (UniqueName: \"kubernetes.io/projected/9ff4b695-b070-4674-a703-cd00568559f5-kube-api-access-dl969\") pod \"coredns-66bc5c9577-7ln7x\" (UID: \"9ff4b695-b070-4674-a703-cd00568559f5\") " pod="kube-system/coredns-66bc5c9577-7ln7x" Dec 12 17:41:24.319211 kubelet[2737]: I1212 17:41:24.317288 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42e96bc1-b1e8-4725-afa1-530d18ed87af-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-glc24\" (UID: \"42e96bc1-b1e8-4725-afa1-530d18ed87af\") " pod="calico-system/goldmane-7c778bb748-glc24" Dec 12 17:41:24.319211 kubelet[2737]: I1212 17:41:24.317324 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jwrh\" (UniqueName: \"kubernetes.io/projected/117d04d2-944b-4a0a-a3e5-7cfa981b4f19-kube-api-access-8jwrh\") pod \"coredns-66bc5c9577-79smq\" (UID: \"117d04d2-944b-4a0a-a3e5-7cfa981b4f19\") " pod="kube-system/coredns-66bc5c9577-79smq" Dec 12 17:41:24.319211 kubelet[2737]: I1212 17:41:24.317398 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5prz9\" (UniqueName: \"kubernetes.io/projected/8b64634c-702a-4400-b949-31628cf96118-kube-api-access-5prz9\") pod \"calico-kube-controllers-64f8b9c58f-6dld8\" (UID: \"8b64634c-702a-4400-b949-31628cf96118\") " pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" Dec 12 17:41:24.319211 
kubelet[2737]: I1212 17:41:24.317424 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfvz\" (UniqueName: \"kubernetes.io/projected/a763186a-19f9-4afb-a531-5c752f0913e7-kube-api-access-nsfvz\") pod \"whisker-9cdcc98d6-crbhs\" (UID: \"a763186a-19f9-4afb-a531-5c752f0913e7\") " pod="calico-system/whisker-9cdcc98d6-crbhs" Dec 12 17:41:24.319211 kubelet[2737]: I1212 17:41:24.317466 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff4b695-b070-4674-a703-cd00568559f5-config-volume\") pod \"coredns-66bc5c9577-7ln7x\" (UID: \"9ff4b695-b070-4674-a703-cd00568559f5\") " pod="kube-system/coredns-66bc5c9577-7ln7x" Dec 12 17:41:24.318875 systemd[1]: Created slice kubepods-besteffort-pod0d34868c_1018_4669_b30b_dbcccb35f648.slice - libcontainer container kubepods-besteffort-pod0d34868c_1018_4669_b30b_dbcccb35f648.slice. Dec 12 17:41:24.319424 kubelet[2737]: I1212 17:41:24.317485 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-ca-bundle\") pod \"whisker-9cdcc98d6-crbhs\" (UID: \"a763186a-19f9-4afb-a531-5c752f0913e7\") " pod="calico-system/whisker-9cdcc98d6-crbhs" Dec 12 17:41:24.319424 kubelet[2737]: I1212 17:41:24.317508 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d34868c-1018-4669-b30b-dbcccb35f648-calico-apiserver-certs\") pod \"calico-apiserver-6d47949b7b-87fxs\" (UID: \"0d34868c-1018-4669-b30b-dbcccb35f648\") " pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" Dec 12 17:41:24.319424 kubelet[2737]: I1212 17:41:24.317534 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/42e96bc1-b1e8-4725-afa1-530d18ed87af-goldmane-key-pair\") pod \"goldmane-7c778bb748-glc24\" (UID: \"42e96bc1-b1e8-4725-afa1-530d18ed87af\") " pod="calico-system/goldmane-7c778bb748-glc24" Dec 12 17:41:24.319424 kubelet[2737]: I1212 17:41:24.317561 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-backend-key-pair\") pod \"whisker-9cdcc98d6-crbhs\" (UID: \"a763186a-19f9-4afb-a531-5c752f0913e7\") " pod="calico-system/whisker-9cdcc98d6-crbhs" Dec 12 17:41:24.319424 kubelet[2737]: I1212 17:41:24.317577 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lppcs\" (UniqueName: \"kubernetes.io/projected/59335d98-4d93-4df2-bc4b-c4c82b6bcd24-kube-api-access-lppcs\") pod \"calico-apiserver-8ff6bd4cf-pcmpc\" (UID: \"59335d98-4d93-4df2-bc4b-c4c82b6bcd24\") " pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" Dec 12 17:41:24.319657 kubelet[2737]: I1212 17:41:24.317604 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgh44\" (UniqueName: \"kubernetes.io/projected/0d34868c-1018-4669-b30b-dbcccb35f648-kube-api-access-tgh44\") pod \"calico-apiserver-6d47949b7b-87fxs\" (UID: \"0d34868c-1018-4669-b30b-dbcccb35f648\") " pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" Dec 12 17:41:24.319657 
kubelet[2737]: I1212 17:41:24.317620 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/59335d98-4d93-4df2-bc4b-c4c82b6bcd24-calico-apiserver-certs\") pod \"calico-apiserver-8ff6bd4cf-pcmpc\" (UID: \"59335d98-4d93-4df2-bc4b-c4c82b6bcd24\") " pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" Dec 12 17:41:24.319657 kubelet[2737]: I1212 17:41:24.317639 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdvd\" (UniqueName: \"kubernetes.io/projected/6276624d-bee7-4066-84ff-d2e0529aa160-kube-api-access-7vdvd\") pod \"calico-apiserver-8ff6bd4cf-74cxw\" (UID: \"6276624d-bee7-4066-84ff-d2e0529aa160\") " pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" Dec 12 17:41:24.319657 kubelet[2737]: I1212 17:41:24.317654 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b64634c-702a-4400-b949-31628cf96118-tigera-ca-bundle\") pod \"calico-kube-controllers-64f8b9c58f-6dld8\" (UID: \"8b64634c-702a-4400-b949-31628cf96118\") " pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" Dec 12 17:41:24.324751 systemd[1]: Created slice kubepods-besteffort-pod42e96bc1_b1e8_4725_afa1_530d18ed87af.slice - libcontainer container kubepods-besteffort-pod42e96bc1_b1e8_4725_afa1_530d18ed87af.slice. Dec 12 17:41:24.580690 containerd[1582]: time="2025-12-12T17:41:24.580553530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f8b9c58f-6dld8,Uid:8b64634c-702a-4400-b949-31628cf96118,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:24.590873 containerd[1582]: time="2025-12-12T17:41:24.590344981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-pcmpc,Uid:59335d98-4d93-4df2-bc4b-c4c82b6bcd24,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:24.598634 kubelet[2737]: E1212 17:41:24.598598 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:24.599485 containerd[1582]: time="2025-12-12T17:41:24.599431516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-79smq,Uid:117d04d2-944b-4a0a-a3e5-7cfa981b4f19,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:24.606477 kubelet[2737]: E1212 17:41:24.606440 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:24.607200 containerd[1582]: time="2025-12-12T17:41:24.607156791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7ln7x,Uid:9ff4b695-b070-4674-a703-cd00568559f5,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:24.614736 containerd[1582]: time="2025-12-12T17:41:24.614683650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-74cxw,Uid:6276624d-bee7-4066-84ff-d2e0529aa160,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:24.619159 containerd[1582]: time="2025-12-12T17:41:24.619121208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9cdcc98d6-crbhs,Uid:a763186a-19f9-4afb-a531-5c752f0913e7,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:24.630256 containerd[1582]: time="2025-12-12T17:41:24.630217463Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d47949b7b-87fxs,Uid:0d34868c-1018-4669-b30b-dbcccb35f648,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:24.631868 containerd[1582]: time="2025-12-12T17:41:24.631836675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-glc24,Uid:42e96bc1-b1e8-4725-afa1-530d18ed87af,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:24.707042 containerd[1582]: time="2025-12-12T17:41:24.706929342Z" level=error msg="Failed to destroy network for sandbox \"25a7649b03100a9c048a31644e105699fa3d8ffa788a7fbbb02d2a50f6fca8a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.712234 containerd[1582]: time="2025-12-12T17:41:24.712140435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f8b9c58f-6dld8,Uid:8b64634c-702a-4400-b949-31628cf96118,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a7649b03100a9c048a31644e105699fa3d8ffa788a7fbbb02d2a50f6fca8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.714461 kubelet[2737]: E1212 17:41:24.714330 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a7649b03100a9c048a31644e105699fa3d8ffa788a7fbbb02d2a50f6fca8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.714461 kubelet[2737]: E1212 17:41:24.714418 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a7649b03100a9c048a31644e105699fa3d8ffa788a7fbbb02d2a50f6fca8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" Dec 12 17:41:24.714461 kubelet[2737]: E1212 17:41:24.714436 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a7649b03100a9c048a31644e105699fa3d8ffa788a7fbbb02d2a50f6fca8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" Dec 12 17:41:24.716135 kubelet[2737]: E1212 17:41:24.714496 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64f8b9c58f-6dld8_calico-system(8b64634c-702a-4400-b949-31628cf96118)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64f8b9c58f-6dld8_calico-system(8b64634c-702a-4400-b949-31628cf96118)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25a7649b03100a9c048a31644e105699fa3d8ffa788a7fbbb02d2a50f6fca8a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" podUID="8b64634c-702a-4400-b949-31628cf96118" Dec 12 17:41:24.718092 containerd[1582]: time="2025-12-12T17:41:24.718038000Z" level=error msg="Failed to destroy network for sandbox \"428544883df5e532d562b9ae1a2911de448fd9a3452ed5bceb45ee5afe52bb36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.721167 containerd[1582]: time="2025-12-12T17:41:24.721099614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-79smq,Uid:117d04d2-944b-4a0a-a3e5-7cfa981b4f19,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"428544883df5e532d562b9ae1a2911de448fd9a3452ed5bceb45ee5afe52bb36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.725583 kubelet[2737]: E1212 17:41:24.721853 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428544883df5e532d562b9ae1a2911de448fd9a3452ed5bceb45ee5afe52bb36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.725583 kubelet[2737]: E1212 17:41:24.721916 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428544883df5e532d562b9ae1a2911de448fd9a3452ed5bceb45ee5afe52bb36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-79smq" Dec 12 17:41:24.725583 kubelet[2737]: E1212 17:41:24.721936 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428544883df5e532d562b9ae1a2911de448fd9a3452ed5bceb45ee5afe52bb36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-79smq" Dec 12 17:41:24.728029 kubelet[2737]: E1212 17:41:24.721984 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-79smq_kube-system(117d04d2-944b-4a0a-a3e5-7cfa981b4f19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-79smq_kube-system(117d04d2-944b-4a0a-a3e5-7cfa981b4f19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"428544883df5e532d562b9ae1a2911de448fd9a3452ed5bceb45ee5afe52bb36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-79smq" podUID="117d04d2-944b-4a0a-a3e5-7cfa981b4f19" Dec 12 17:41:24.736551 containerd[1582]: time="2025-12-12T17:41:24.736507712Z" level=error msg="Failed to destroy network for sandbox \"e8e927cb421b92b4257c781f5c959f49aee2e9700a9219169af4f7f311403865\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.740566 containerd[1582]: time="2025-12-12T17:41:24.740522912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-pcmpc,Uid:59335d98-4d93-4df2-bc4b-c4c82b6bcd24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e927cb421b92b4257c781f5c959f49aee2e9700a9219169af4f7f311403865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.741625 kubelet[2737]: E1212 17:41:24.741201 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e927cb421b92b4257c781f5c959f49aee2e9700a9219169af4f7f311403865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.741625 kubelet[2737]: E1212 17:41:24.741268 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e927cb421b92b4257c781f5c959f49aee2e9700a9219169af4f7f311403865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" Dec 12 17:41:24.741625 kubelet[2737]: E1212 17:41:24.741289 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e927cb421b92b4257c781f5c959f49aee2e9700a9219169af4f7f311403865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" Dec 12 17:41:24.741784 kubelet[2737]: E1212 17:41:24.741585 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8ff6bd4cf-pcmpc_calico-apiserver(59335d98-4d93-4df2-bc4b-c4c82b6bcd24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8ff6bd4cf-pcmpc_calico-apiserver(59335d98-4d93-4df2-bc4b-c4c82b6bcd24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e927cb421b92b4257c781f5c959f49aee2e9700a9219169af4f7f311403865\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" podUID="59335d98-4d93-4df2-bc4b-c4c82b6bcd24" Dec 12 17:41:24.744938 containerd[1582]: time="2025-12-12T17:41:24.736360791Z" level=error msg="Failed to destroy network for sandbox \"06b98b2579df614dfad8c85dcb2d13e4dc3a7f5d1fd25a4eeca84c4d727a9072\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.747870 containerd[1582]: time="2025-12-12T17:41:24.747813506Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-74cxw,Uid:6276624d-bee7-4066-84ff-d2e0529aa160,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b98b2579df614dfad8c85dcb2d13e4dc3a7f5d1fd25a4eeca84c4d727a9072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.748578 kubelet[2737]: E1212 17:41:24.748071 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b98b2579df614dfad8c85dcb2d13e4dc3a7f5d1fd25a4eeca84c4d727a9072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.748578 kubelet[2737]: E1212 17:41:24.748156 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b98b2579df614dfad8c85dcb2d13e4dc3a7f5d1fd25a4eeca84c4d727a9072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" Dec 12 17:41:24.748578 kubelet[2737]: E1212 17:41:24.748174 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b98b2579df614dfad8c85dcb2d13e4dc3a7f5d1fd25a4eeca84c4d727a9072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" Dec 12 17:41:24.748746 kubelet[2737]: E1212 17:41:24.748242 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8ff6bd4cf-74cxw_calico-apiserver(6276624d-bee7-4066-84ff-d2e0529aa160)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8ff6bd4cf-74cxw_calico-apiserver(6276624d-bee7-4066-84ff-d2e0529aa160)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06b98b2579df614dfad8c85dcb2d13e4dc3a7f5d1fd25a4eeca84c4d727a9072\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" podUID="6276624d-bee7-4066-84ff-d2e0529aa160" Dec 12 17:41:24.750003 containerd[1582]: time="2025-12-12T17:41:24.749964506Z" level=error msg="Failed to destroy network for sandbox \"12ffaa7864084bb9a3c2df57e0682ecb6350f2ca5e8546f89f9810259e164390\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.750835 containerd[1582]: time="2025-12-12T17:41:24.750771811Z" level=error msg="Failed to destroy network for sandbox \"bf5845067812531377bfaa8184436c5cb317074bf66ecb3570f6952f24476122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 
17:41:24.752483 containerd[1582]: time="2025-12-12T17:41:24.752443638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9cdcc98d6-crbhs,Uid:a763186a-19f9-4afb-a531-5c752f0913e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12ffaa7864084bb9a3c2df57e0682ecb6350f2ca5e8546f89f9810259e164390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.752839 kubelet[2737]: E1212 17:41:24.752692 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12ffaa7864084bb9a3c2df57e0682ecb6350f2ca5e8546f89f9810259e164390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.752839 kubelet[2737]: E1212 17:41:24.752782 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12ffaa7864084bb9a3c2df57e0682ecb6350f2ca5e8546f89f9810259e164390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9cdcc98d6-crbhs" Dec 12 17:41:24.753027 kubelet[2737]: E1212 17:41:24.752797 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12ffaa7864084bb9a3c2df57e0682ecb6350f2ca5e8546f89f9810259e164390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9cdcc98d6-crbhs" Dec 12 17:41:24.753970 kubelet[2737]: E1212 17:41:24.753012 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9cdcc98d6-crbhs_calico-system(a763186a-19f9-4afb-a531-5c752f0913e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9cdcc98d6-crbhs_calico-system(a763186a-19f9-4afb-a531-5c752f0913e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12ffaa7864084bb9a3c2df57e0682ecb6350f2ca5e8546f89f9810259e164390\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9cdcc98d6-crbhs" podUID="a763186a-19f9-4afb-a531-5c752f0913e7" Dec 12 17:41:24.755001 containerd[1582]: time="2025-12-12T17:41:24.754955378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7ln7x,Uid:9ff4b695-b070-4674-a703-cd00568559f5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5845067812531377bfaa8184436c5cb317074bf66ecb3570f6952f24476122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.755212 kubelet[2737]: E1212 17:41:24.755184 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"bf5845067812531377bfaa8184436c5cb317074bf66ecb3570f6952f24476122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.755322 kubelet[2737]: E1212 17:41:24.755305 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5845067812531377bfaa8184436c5cb317074bf66ecb3570f6952f24476122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7ln7x" Dec 12 17:41:24.755458 kubelet[2737]: E1212 17:41:24.755378 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5845067812531377bfaa8184436c5cb317074bf66ecb3570f6952f24476122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7ln7x" Dec 12 17:41:24.755458 kubelet[2737]: E1212 17:41:24.755427 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7ln7x_kube-system(9ff4b695-b070-4674-a703-cd00568559f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7ln7x_kube-system(9ff4b695-b070-4674-a703-cd00568559f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf5845067812531377bfaa8184436c5cb317074bf66ecb3570f6952f24476122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7ln7x" podUID="9ff4b695-b070-4674-a703-cd00568559f5" Dec 12 17:41:24.763151 containerd[1582]: time="2025-12-12T17:41:24.762619076Z" level=error msg="Failed to destroy network for sandbox \"e0bd5acfcd33be3df99ea0a5f79ac5e176505a1ec9f432fb7d87e31250662517\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.765973 containerd[1582]: time="2025-12-12T17:41:24.765923878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d47949b7b-87fxs,Uid:0d34868c-1018-4669-b30b-dbcccb35f648,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bd5acfcd33be3df99ea0a5f79ac5e176505a1ec9f432fb7d87e31250662517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.766186 kubelet[2737]: E1212 17:41:24.766148 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bd5acfcd33be3df99ea0a5f79ac5e176505a1ec9f432fb7d87e31250662517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.766232 kubelet[2737]: E1212 17:41:24.766201 2737 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bd5acfcd33be3df99ea0a5f79ac5e176505a1ec9f432fb7d87e31250662517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" Dec 12 17:41:24.766232 kubelet[2737]: E1212 17:41:24.766218 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bd5acfcd33be3df99ea0a5f79ac5e176505a1ec9f432fb7d87e31250662517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" Dec 12 17:41:24.766292 kubelet[2737]: E1212 17:41:24.766266 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d47949b7b-87fxs_calico-apiserver(0d34868c-1018-4669-b30b-dbcccb35f648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d47949b7b-87fxs_calico-apiserver(0d34868c-1018-4669-b30b-dbcccb35f648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0bd5acfcd33be3df99ea0a5f79ac5e176505a1ec9f432fb7d87e31250662517\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" podUID="0d34868c-1018-4669-b30b-dbcccb35f648" Dec 12 17:41:24.768180 containerd[1582]: time="2025-12-12T17:41:24.768148899Z" level=error msg="Failed to destroy network for sandbox \"962ea8b6fbc5a174483de91e29fec97cbb1c2593a057479630a975de3612ebec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.770636 containerd[1582]: time="2025-12-12T17:41:24.769658040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-glc24,Uid:42e96bc1-b1e8-4725-afa1-530d18ed87af,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ea8b6fbc5a174483de91e29fec97cbb1c2593a057479630a975de3612ebec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.770926 kubelet[2737]: E1212 17:41:24.770885 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ea8b6fbc5a174483de91e29fec97cbb1c2593a057479630a975de3612ebec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.770979 kubelet[2737]: E1212 17:41:24.770937 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ea8b6fbc5a174483de91e29fec97cbb1c2593a057479630a975de3612ebec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-glc24" Dec 12 17:41:24.770979 kubelet[2737]: E1212 17:41:24.770957 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"962ea8b6fbc5a174483de91e29fec97cbb1c2593a057479630a975de3612ebec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-glc24" Dec 12 17:41:24.771041 kubelet[2737]: E1212 17:41:24.771005 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-glc24_calico-system(42e96bc1-b1e8-4725-afa1-530d18ed87af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-glc24_calico-system(42e96bc1-b1e8-4725-afa1-530d18ed87af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"962ea8b6fbc5a174483de91e29fec97cbb1c2593a057479630a975de3612ebec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-glc24" podUID="42e96bc1-b1e8-4725-afa1-530d18ed87af" Dec 12 17:41:24.846162 systemd[1]: Created slice kubepods-besteffort-pode9842b9a_1f71_48b6_9feb_99cf8f1cbcdb.slice - libcontainer container kubepods-besteffort-pode9842b9a_1f71_48b6_9feb_99cf8f1cbcdb.slice. Dec 12 17:41:24.850750 containerd[1582]: time="2025-12-12T17:41:24.850673559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpm8r,Uid:e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:24.892625 containerd[1582]: time="2025-12-12T17:41:24.892578048Z" level=error msg="Failed to destroy network for sandbox \"1fa8a1e32b5a5cfe1c2e3fbb35613f8b7630ffefdb2d56d770f8b3dcf73132d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.894340 containerd[1582]: time="2025-12-12T17:41:24.894295287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpm8r,Uid:e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa8a1e32b5a5cfe1c2e3fbb35613f8b7630ffefdb2d56d770f8b3dcf73132d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.894561 kubelet[2737]: E1212 17:41:24.894520 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa8a1e32b5a5cfe1c2e3fbb35613f8b7630ffefdb2d56d770f8b3dcf73132d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:24.894620 kubelet[2737]: E1212 17:41:24.894580 2737 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1fa8a1e32b5a5cfe1c2e3fbb35613f8b7630ffefdb2d56d770f8b3dcf73132d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:24.894620 kubelet[2737]: E1212 17:41:24.894601 2737 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa8a1e32b5a5cfe1c2e3fbb35613f8b7630ffefdb2d56d770f8b3dcf73132d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpm8r" Dec 12 17:41:24.894683 kubelet[2737]: E1212 17:41:24.894653 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpm8r_calico-system(e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpm8r_calico-system(e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fa8a1e32b5a5cfe1c2e3fbb35613f8b7630ffefdb2d56d770f8b3dcf73132d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:24.964490 kubelet[2737]: E1212 17:41:24.964450 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:24.965501 containerd[1582]: time="2025-12-12T17:41:24.965467860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:41:25.507570 systemd[1]: run-netns-cni\x2d0d89e190\x2d0bd0\x2d1363\x2d4e9b\x2def6b3975c254.mount: Deactivated successfully. Dec 12 17:41:25.507673 systemd[1]: run-netns-cni\x2d010860d7\x2dc0a1\x2d9476\x2d84c2\x2d9b624207d0ab.mount: Deactivated successfully. Dec 12 17:41:25.507719 systemd[1]: run-netns-cni\x2d9a3a99ad\x2d0229\x2d3105\x2ddeeb\x2d1ae190e5906c.mount: Deactivated successfully. Dec 12 17:41:25.507760 systemd[1]: run-netns-cni\x2dd5486556\x2d36a5\x2d238a\x2d49fb\x2dd576ee920e0b.mount: Deactivated successfully. Dec 12 17:41:28.801772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116545967.mount: Deactivated successfully. 
Dec 12 17:41:28.900722 containerd[1582]: time="2025-12-12T17:41:28.900133637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:28.901096 containerd[1582]: time="2025-12-12T17:41:28.900850569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:41:28.901707 containerd[1582]: time="2025-12-12T17:41:28.901679408Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:28.903638 containerd[1582]: time="2025-12-12T17:41:28.903582065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:28.904111 containerd[1582]: time="2025-12-12T17:41:28.904073382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.938495012s" Dec 12 17:41:28.904163 containerd[1582]: time="2025-12-12T17:41:28.904112952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:41:28.920916 containerd[1582]: time="2025-12-12T17:41:28.920857250Z" level=info msg="CreateContainer within sandbox \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:41:28.928547 containerd[1582]: time="2025-12-12T17:41:28.927425666Z" level=info msg="Container a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:28.951704 containerd[1582]: time="2025-12-12T17:41:28.951653960Z" level=info msg="CreateContainer within sandbox \"bfa6c03f87e9ff7e8ab844279403eb09f72afc60288722a399b66ba8994f4f50\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535\"" Dec 12 17:41:28.952556 containerd[1582]: time="2025-12-12T17:41:28.952523448Z" level=info msg="StartContainer for \"a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535\"" Dec 12 17:41:28.954167 containerd[1582]: time="2025-12-12T17:41:28.954138796Z" level=info msg="connecting to shim a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535" address="unix:///run/containerd/s/ce4d8a912458221c97f9d3de59d5e86b9218b5642973bd9967da936e2c7a3dd8" protocol=ttrpc version=3 Dec 12 17:41:28.977043 systemd[1]: Started cri-containerd-a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535.scope - libcontainer container a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535. 
Dec 12 17:41:29.061000 audit: BPF prog-id=170 op=LOAD Dec 12 17:41:29.064687 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 17:41:29.064766 kernel: audit: type=1334 audit(1765561289.061:559): prog-id=170 op=LOAD Dec 12 17:41:29.064789 kernel: audit: type=1300 audit(1765561289.061:559): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.061000 audit[3829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.071017 kernel: audit: type=1327 audit(1765561289.061:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.071074 kernel: audit: type=1334 audit(1765561289.062:560): prog-id=171 op=LOAD Dec 12 17:41:29.062000 audit: BPF prog-id=171 op=LOAD Dec 12 17:41:29.062000 audit[3829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.074898 kernel: audit: type=1300 audit(1765561289.062:560): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.074997 kernel: audit: type=1327 audit(1765561289.062:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.062000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:41:29.078818 kernel: audit: type=1334 audit(1765561289.062:561): prog-id=171 op=UNLOAD Dec 12 17:41:29.062000 audit[3829]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.082011 kernel: audit: type=1300 
audit(1765561289.062:561): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.085307 kernel: audit: type=1327 audit(1765561289.062:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.085368 kernel: audit: type=1334 audit(1765561289.062:562): prog-id=170 op=UNLOAD Dec 12 17:41:29.062000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:41:29.062000 audit[3829]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.062000 audit: BPF prog-id=172 op=LOAD Dec 12 17:41:29.062000 audit[3829]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3237 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:29.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663363626633656634656266363261616130353835653963306664 Dec 12 17:41:29.116018 containerd[1582]: time="2025-12-12T17:41:29.115889818Z" level=info msg="StartContainer for \"a0f3cbf3ef4ebf62aaa0585e9c0fd3cb0bca97d82364391a496e2fbd66c58535\" returns successfully" Dec 12 17:41:29.221264 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:41:29.221386 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 12 17:41:29.456378 kubelet[2737]: I1212 17:41:29.455986 2737 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-backend-key-pair\") pod \"a763186a-19f9-4afb-a531-5c752f0913e7\" (UID: \"a763186a-19f9-4afb-a531-5c752f0913e7\") " Dec 12 17:41:29.456378 kubelet[2737]: I1212 17:41:29.456040 2737 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfvz\" (UniqueName: \"kubernetes.io/projected/a763186a-19f9-4afb-a531-5c752f0913e7-kube-api-access-nsfvz\") pod \"a763186a-19f9-4afb-a531-5c752f0913e7\" (UID: \"a763186a-19f9-4afb-a531-5c752f0913e7\") " Dec 12 17:41:29.456378 kubelet[2737]: I1212 17:41:29.456081 2737 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-ca-bundle\") pod \"a763186a-19f9-4afb-a531-5c752f0913e7\" (UID: \"a763186a-19f9-4afb-a531-5c752f0913e7\") " Dec 12 17:41:29.477156 kubelet[2737]: I1212 17:41:29.477109 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a763186a-19f9-4afb-a531-5c752f0913e7" (UID: "a763186a-19f9-4afb-a531-5c752f0913e7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:41:29.480012 kubelet[2737]: I1212 17:41:29.479965 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a763186a-19f9-4afb-a531-5c752f0913e7" (UID: "a763186a-19f9-4afb-a531-5c752f0913e7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:41:29.480534 kubelet[2737]: I1212 17:41:29.480483 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a763186a-19f9-4afb-a531-5c752f0913e7-kube-api-access-nsfvz" (OuterVolumeSpecName: "kube-api-access-nsfvz") pod "a763186a-19f9-4afb-a531-5c752f0913e7" (UID: "a763186a-19f9-4afb-a531-5c752f0913e7"). InnerVolumeSpecName "kube-api-access-nsfvz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:41:29.557277 kubelet[2737]: I1212 17:41:29.557237 2737 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:41:29.557277 kubelet[2737]: I1212 17:41:29.557272 2737 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a763186a-19f9-4afb-a531-5c752f0913e7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:41:29.557277 kubelet[2737]: I1212 17:41:29.557281 2737 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsfvz\" (UniqueName: \"kubernetes.io/projected/a763186a-19f9-4afb-a531-5c752f0913e7-kube-api-access-nsfvz\") on node \"localhost\" DevicePath \"\"" Dec 12 17:41:29.802516 systemd[1]: var-lib-kubelet-pods-a763186a\x2d19f9\x2d4afb\x2da531\x2d5c752f0913e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsfvz.mount: Deactivated successfully. 
Dec 12 17:41:29.802613 systemd[1]: var-lib-kubelet-pods-a763186a\x2d19f9\x2d4afb\x2da531\x2d5c752f0913e7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:41:29.986636 kubelet[2737]: E1212 17:41:29.986604 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:29.990865 systemd[1]: Removed slice kubepods-besteffort-poda763186a_19f9_4afb_a531_5c752f0913e7.slice - libcontainer container kubepods-besteffort-poda763186a_19f9_4afb_a531_5c752f0913e7.slice. Dec 12 17:41:30.015021 kubelet[2737]: I1212 17:41:30.014939 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5kscz" podStartSLOduration=2.075719302 podStartE2EDuration="14.00603101s" podCreationTimestamp="2025-12-12 17:41:16 +0000 UTC" firstStartedPulling="2025-12-12 17:41:16.974687177 +0000 UTC m=+22.243650269" lastFinishedPulling="2025-12-12 17:41:28.904998885 +0000 UTC m=+34.173961977" observedRunningTime="2025-12-12 17:41:30.004887314 +0000 UTC m=+35.273850406" watchObservedRunningTime="2025-12-12 17:41:30.00603101 +0000 UTC m=+35.274994102" Dec 12 17:41:30.068675 systemd[1]: Created slice kubepods-besteffort-podcc4e0f8f_eaf8_4fb6_aa42_5841d9a7072a.slice - libcontainer container kubepods-besteffort-podcc4e0f8f_eaf8_4fb6_aa42_5841d9a7072a.slice. Dec 12 17:41:30.161389 kubelet[2737]: I1212 17:41:30.161328 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44m48\" (UniqueName: \"kubernetes.io/projected/cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a-kube-api-access-44m48\") pod \"whisker-75798fb48-m5jp4\" (UID: \"cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a\") " pod="calico-system/whisker-75798fb48-m5jp4" Dec 12 17:41:30.161535 kubelet[2737]: I1212 17:41:30.161441 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a-whisker-ca-bundle\") pod \"whisker-75798fb48-m5jp4\" (UID: \"cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a\") " pod="calico-system/whisker-75798fb48-m5jp4" Dec 12 17:41:30.161535 kubelet[2737]: I1212 17:41:30.161485 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a-whisker-backend-key-pair\") pod \"whisker-75798fb48-m5jp4\" (UID: \"cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a\") " pod="calico-system/whisker-75798fb48-m5jp4" Dec 12 17:41:30.377542 containerd[1582]: time="2025-12-12T17:41:30.377447501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75798fb48-m5jp4,Uid:cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:30.606163 systemd-networkd[1490]: cali4a70630991a: Link UP Dec 12 17:41:30.608931 systemd-networkd[1490]: cali4a70630991a: Gained carrier Dec 12 17:41:30.628497 containerd[1582]: 2025-12-12 17:41:30.400 [INFO][3895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:41:30.628497 containerd[1582]: 2025-12-12 17:41:30.440 [INFO][3895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--75798fb48--m5jp4-eth0 whisker-75798fb48- calico-system cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a 920 0 2025-12-12 17:41:30 
+0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75798fb48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-75798fb48-m5jp4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4a70630991a [] [] }} ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-" Dec 12 17:41:30.628497 containerd[1582]: 2025-12-12 17:41:30.440 [INFO][3895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.628497 containerd[1582]: 2025-12-12 17:41:30.536 [INFO][3910] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" HandleID="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Workload="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.536 [INFO][3910] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" HandleID="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Workload="localhost-k8s-whisker--75798fb48--m5jp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136f90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-75798fb48-m5jp4", "timestamp":"2025-12-12 17:41:30.536060179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.536 [INFO][3910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.536 [INFO][3910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.537 [INFO][3910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.554 [INFO][3910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" host="localhost" Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.562 [INFO][3910] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.567 [INFO][3910] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.574 [INFO][3910] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.577 [INFO][3910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:30.628731 containerd[1582]: 2025-12-12 17:41:30.577 [INFO][3910] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" host="localhost" Dec 12 17:41:30.628966 containerd[1582]: 2025-12-12 17:41:30.579 [INFO][3910] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5 Dec 12 17:41:30.628966 containerd[1582]: 2025-12-12 17:41:30.584 [INFO][3910] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" host="localhost" Dec 12 17:41:30.628966 containerd[1582]: 2025-12-12 17:41:30.589 [INFO][3910] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" host="localhost" Dec 12 17:41:30.628966 containerd[1582]: 2025-12-12 17:41:30.589 [INFO][3910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" host="localhost" Dec 12 17:41:30.628966 containerd[1582]: 2025-12-12 17:41:30.589 [INFO][3910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
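
The IPAM entries above show Calico confirming the block 192.168.88.128/26 as affine to this host and then handing out 192.168.88.129/26 for the whisker pod. As a quick sanity check on those numbers (a minimal sketch in Python, not part of the log, using only the standard ipaddress module), the assigned address is simply the first usable host of that /26 block:

# Minimal sketch (not taken from the log): check that the address Calico IPAM
# assigned (192.168.88.129/26) is the first host of the affine block
# 192.168.88.128/26 reported in the entries above.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_interface("192.168.88.129/26")

assert assigned.ip in block                # the address lies inside the affine block
assert assigned.ip == next(block.hosts())  # and is the block's first usable host
print(f"{assigned.ip} is host #1 of {block} ({block.num_addresses} addresses)")
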
Dec 12 17:41:30.628966 containerd[1582]: 2025-12-12 17:41:30.589 [INFO][3910] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" HandleID="k8s-pod-network.0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Workload="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.630899 containerd[1582]: 2025-12-12 17:41:30.595 [INFO][3895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75798fb48--m5jp4-eth0", GenerateName:"whisker-75798fb48-", Namespace:"calico-system", SelfLink:"", UID:"cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75798fb48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-75798fb48-m5jp4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a70630991a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:30.630899 containerd[1582]: 2025-12-12 17:41:30.596 [INFO][3895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.630993 containerd[1582]: 2025-12-12 17:41:30.596 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a70630991a ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.630993 containerd[1582]: 2025-12-12 17:41:30.607 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.631034 containerd[1582]: 2025-12-12 17:41:30.607 [INFO][3895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75798fb48--m5jp4-eth0", GenerateName:"whisker-75798fb48-", Namespace:"calico-system", SelfLink:"", UID:"cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75798fb48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5", Pod:"whisker-75798fb48-m5jp4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a70630991a", MAC:"8e:cb:12:0b:e7:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:30.631084 containerd[1582]: 2025-12-12 17:41:30.623 [INFO][3895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" Namespace="calico-system" Pod="whisker-75798fb48-m5jp4" WorkloadEndpoint="localhost-k8s-whisker--75798fb48--m5jp4-eth0" Dec 12 17:41:30.771978 containerd[1582]: time="2025-12-12T17:41:30.771926037Z" level=info msg="connecting to shim 0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5" address="unix:///run/containerd/s/fdb4a6fe14511555977dd0af9c6a75842115111e28ec3ab06d20c5996a39c959" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:30.786000 audit: BPF prog-id=173 op=LOAD Dec 12 17:41:30.786000 audit[4058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1ffe338 a2=98 a3=fffff1ffe328 items=0 ppid=3955 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:41:30.786000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:41:30.786000 audit[4058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff1ffe308 a3=0 items=0 ppid=3955 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:41:30.786000 audit: BPF 
prog-id=174 op=LOAD Dec 12 17:41:30.786000 audit[4058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1ffe1e8 a2=74 a3=95 items=0 ppid=3955 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:41:30.786000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:41:30.786000 audit[4058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3955 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:41:30.786000 audit: BPF prog-id=175 op=LOAD Dec 12 17:41:30.786000 audit[4058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1ffe218 a2=40 a3=fffff1ffe248 items=0 ppid=3955 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:41:30.787000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:41:30.787000 audit[4058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff1ffe248 items=0 ppid=3955 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.787000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:41:30.789000 audit: BPF prog-id=176 op=LOAD Dec 12 17:41:30.789000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0400c68 a2=98 a3=ffffc0400c58 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.789000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.790000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:41:30.790000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc0400c38 a3=0 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.790000 audit: BPF prog-id=177 op=LOAD Dec 12 17:41:30.790000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc04008f8 a2=74 a3=95 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.790000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:41:30.790000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.790000 audit: BPF prog-id=178 op=LOAD Dec 12 17:41:30.790000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0400958 a2=94 a3=2 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.790000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:41:30.790000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.807994 systemd[1]: Started cri-containerd-0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5.scope - libcontainer container 0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5. 
Dec 12 17:41:30.825000 audit: BPF prog-id=179 op=LOAD Dec 12 17:41:30.827000 audit: BPF prog-id=180 op=LOAD Dec 12 17:41:30.827000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.827000 audit: BPF prog-id=180 op=UNLOAD Dec 12 17:41:30.827000 audit[4050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.828000 audit: BPF prog-id=181 op=LOAD Dec 12 17:41:30.828000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.828000 audit: BPF prog-id=182 op=LOAD Dec 12 17:41:30.828000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.828000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:41:30.828000 audit[4050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.828000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:41:30.828000 audit[4050]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.828000 audit: BPF prog-id=183 op=LOAD Dec 12 17:41:30.828000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4037 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032353261646362316235326336633762633034353130363164363231 Dec 12 17:41:30.830197 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:30.844200 kubelet[2737]: I1212 17:41:30.844156 2737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a763186a-19f9-4afb-a531-5c752f0913e7" path="/var/lib/kubelet/pods/a763186a-19f9-4afb-a531-5c752f0913e7/volumes" Dec 12 17:41:30.864468 containerd[1582]: time="2025-12-12T17:41:30.864406346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75798fb48-m5jp4,Uid:cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0252adcb1b52c6c7bc0451061d621ca87a5632a643c2210cb594a6f708becda5\"" Dec 12 17:41:30.866982 containerd[1582]: time="2025-12-12T17:41:30.866954156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:41:30.909000 audit: BPF prog-id=184 op=LOAD Dec 12 17:41:30.909000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0400918 a2=40 a3=ffffc0400948 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.909000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:41:30.909000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc0400948 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.918000 audit: BPF prog-id=185 op=LOAD Dec 12 17:41:30.918000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0400928 a2=94 a3=4 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.918000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.918000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:41:30.918000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.918000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=186 op=LOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0400768 a2=94 a3=5 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=187 op=LOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0400998 a2=94 a3=6 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=188 op=LOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0400168 a2=94 a3=83 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=189 op=LOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc03fff28 a2=94 a3=2 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.919000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:41:30.919000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=7 a1=57156c a2=c a3=0 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.920000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:41:30.920000 audit[4059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3e6ed620 a3=3e6e0b00 items=0 ppid=3955 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:41:30.929000 audit: BPF prog-id=190 op=LOAD Dec 12 17:41:30.929000 audit[4101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1808138 a2=98 a3=ffffc1808128 items=0 ppid=3955 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:41:30.929000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:41:30.929000 audit[4101]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc1808108 a3=0 items=0 ppid=3955 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:41:30.929000 audit: BPF prog-id=191 op=LOAD Dec 12 17:41:30.929000 audit[4101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1807fe8 a2=74 a3=95 items=0 ppid=3955 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:41:30.929000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:41:30.929000 audit[4101]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3955 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.929000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:41:30.929000 audit: BPF prog-id=192 op=LOAD Dec 12 17:41:30.929000 audit[4101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1808018 a2=40 a3=ffffc1808048 items=0 ppid=3955 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:41:30.929000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:41:30.929000 audit[4101]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc1808048 items=0 ppid=3955 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:30.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:41:30.989205 systemd-networkd[1490]: vxlan.calico: Link UP Dec 12 17:41:30.989211 systemd-networkd[1490]: vxlan.calico: Gained carrier Dec 12 17:41:30.991561 kubelet[2737]: I1212 17:41:30.991535 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:41:30.992481 kubelet[2737]: E1212 17:41:30.992446 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:31.007000 audit: BPF prog-id=193 op=LOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2d36448 a2=98 a3=ffffc2d36438 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc2d36418 a3=0 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=194 op=LOAD Dec 12 
17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2d36128 a2=74 a3=95 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=195 op=LOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2d36188 a2=94 a3=2 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=196 op=LOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2d36008 a2=40 a3=ffffc2d36038 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc2d36038 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=197 op=LOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2d36158 a2=94 a3=b7 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.007000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:41:31.007000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.009000 audit: BPF prog-id=198 op=LOAD Dec 12 17:41:31.009000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2d35808 a2=94 a3=2 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.009000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.009000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:41:31.009000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.009000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.009000 audit: BPF prog-id=199 op=LOAD Dec 12 17:41:31.009000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2d35998 a2=94 a3=30 items=0 ppid=3955 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.009000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:41:31.015000 audit: BPF prog-id=200 op=LOAD Dec 12 17:41:31.015000 audit[4134]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffebc4c88 a2=98 a3=fffffebc4c78 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.015000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:41:31.015000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffebc4c58 a3=0 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.015000 audit: BPF prog-id=201 op=LOAD Dec 12 17:41:31.015000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffebc4918 a2=74 a3=95 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.015000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:41:31.015000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.015000 audit: BPF prog-id=202 op=LOAD Dec 12 17:41:31.015000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffebc4978 a2=94 a3=2 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.015000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:41:31.015000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.015000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.062020 containerd[1582]: time="2025-12-12T17:41:31.061940415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:31.063148 containerd[1582]: time="2025-12-12T17:41:31.063112549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:41:31.063227 containerd[1582]: time="2025-12-12T17:41:31.063173322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:31.063520 kubelet[2737]: E1212 17:41:31.063411 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:41:31.065594 kubelet[2737]: E1212 17:41:31.065551 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:41:31.070315 kubelet[2737]: E1212 17:41:31.069674 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-75798fb48-m5jp4_calico-system(cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:31.071713 containerd[1582]: time="2025-12-12T17:41:31.071442914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:41:31.119000 audit: BPF prog-id=203 op=LOAD Dec 12 17:41:31.119000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffebc4938 a2=40 a3=fffffebc4968 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.119000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.119000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:41:31.119000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffebc4968 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.119000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.129000 audit: BPF prog-id=204 op=LOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=5 a0=5 a1=fffffebc4948 a2=94 a3=4 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.129000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.129000 audit: BPF prog-id=205 op=LOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffebc4788 a2=94 a3=5 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.129000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.129000 audit: BPF prog-id=206 op=LOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffebc49b8 a2=94 a3=6 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.129000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 
12 17:41:31.129000 audit: BPF prog-id=207 op=LOAD Dec 12 17:41:31.129000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffebc4188 a2=94 a3=83 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.130000 audit: BPF prog-id=208 op=LOAD Dec 12 17:41:31.130000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffebc3f48 a2=94 a3=2 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.130000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:41:31.130000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.130000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:41:31.130000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=12fdc620 a3=12fcfb00 items=0 ppid=3955 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:41:31.144000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:41:31.144000 audit[3955]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000d5bb40 a2=0 a3=0 items=0 ppid=3918 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.144000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:41:31.189000 audit[4157]: NETFILTER_CFG table=nat:117 family=2 entries=15 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:31.189000 audit[4157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffeaf1e70 a2=0 a3=ffff98817fa8 items=0 ppid=3955 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.189000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:31.190000 audit[4158]: NETFILTER_CFG table=mangle:118 family=2 entries=16 op=nft_register_chain pid=4158 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:31.190000 audit[4158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe60cee40 a2=0 a3=ffff94f94fa8 items=0 ppid=3955 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.190000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:31.195000 audit[4156]: NETFILTER_CFG table=raw:119 family=2 entries=21 op=nft_register_chain pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:31.195000 audit[4156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff3aa8e90 a2=0 a3=ffff98260fa8 items=0 ppid=3955 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.195000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:31.201000 audit[4161]: NETFILTER_CFG table=filter:120 family=2 entries=94 op=nft_register_chain pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:31.201000 audit[4161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffffa94cc60 a2=0 a3=ffff85810fa8 items=0 ppid=3955 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:31.201000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:31.282735 containerd[1582]: time="2025-12-12T17:41:31.282679960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:31.296819 containerd[1582]: time="2025-12-12T17:41:31.296742167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:41:31.296903 containerd[1582]: time="2025-12-12T17:41:31.296815742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:31.297068 kubelet[2737]: E1212 17:41:31.297029 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:41:31.297115 kubelet[2737]: E1212 17:41:31.297079 2737 kuberuntime_image.go:43] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:41:31.297178 kubelet[2737]: E1212 17:41:31.297156 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-75798fb48-m5jp4_calico-system(cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:31.297230 kubelet[2737]: E1212 17:41:31.297202 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75798fb48-m5jp4" podUID="cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a" Dec 12 17:41:31.700456 systemd-networkd[1490]: cali4a70630991a: Gained IPv6LL Dec 12 17:41:31.995256 kubelet[2737]: E1212 17:41:31.995114 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75798fb48-m5jp4" podUID="cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a" Dec 12 17:41:32.018000 audit[4173]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:32.018000 audit[4173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff22bb110 a2=0 a3=1 items=0 ppid=2891 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:32.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:32.031000 audit[4173]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:32.031000 audit[4173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff22bb110 a2=0 a3=1 items=0 ppid=2891 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:32.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:32.339078 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL Dec 12 17:41:33.157549 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:52986.service - OpenSSH per-connection server daemon (10.0.0.1:52986). Dec 12 17:41:33.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.131:22-10.0.0.1:52986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:33.219000 audit[4180]: USER_ACCT pid=4180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:33.220714 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 52986 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:33.221000 audit[4180]: CRED_ACQ pid=4180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:33.221000 audit[4180]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd506e20 a2=3 a3=0 items=0 ppid=1 pid=4180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:33.221000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:33.222385 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:33.227896 systemd-logind[1556]: New session 8 of user core. Dec 12 17:41:33.234017 systemd[1]: Started session-8.scope - Session 8 of User core. 
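The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv buffers (the bpftool and iptables-nft-restore invocations). A minimal Python sketch to turn one back into a readable command line; the helper name is illustrative only:

    # decode_proctitle.py - render an audit PROCTITLE hex value as a command line.
    # The kernel logs argv as a single buffer with NUL separators, so splitting on
    # b"\x00" recovers the individual arguments.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    if __name__ == "__main__":
        # value copied from one of the iptables-nft-restore records above
        sample = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572"
                  "626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030")
        print(decode_proctitle(sample))
        # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000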
Dec 12 17:41:33.235000 audit[4180]: USER_START pid=4180 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:33.237000 audit[4183]: CRED_ACQ pid=4183 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:33.393008 sshd[4183]: Connection closed by 10.0.0.1 port 52986 Dec 12 17:41:33.393395 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:33.394000 audit[4180]: USER_END pid=4180 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:33.395000 audit[4180]: CRED_DISP pid=4180 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:33.398325 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:52986.service: Deactivated successfully. Dec 12 17:41:33.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.131:22-10.0.0.1:52986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:33.400103 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:41:33.400888 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:41:33.401755 systemd-logind[1556]: Removed session 8. 
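Every Calico image pull in this log fails the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io and kubelet surfaces it as ErrImagePull / ImagePullBackOff for the v3.30.4 tags. One way to confirm whether a tag is actually published is to query the registry's OCI distribution API directly; the sketch below assumes ghcr.io issues anonymous pull tokens for public repositories (as it normally does) and uses only the standard /token and /v2/<repo>/manifests/<tag> endpoints:

    # check_tag.py - probe whether an image tag exists via the OCI distribution API.
    import json
    import urllib.error
    import urllib.request

    REGISTRY = "ghcr.io"
    REPO = "flatcar/calico/whisker-backend"   # repository from the failing pull above
    TAG = "v3.30.4"

    def tag_exists(registry: str, repo: str, tag: str) -> bool:
        # anonymous bearer token scoped to pull (works for public repositories)
        token_url = f"https://{registry}/token?service={registry}&scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://{registry}/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:        # the same 404 containerd logs above
                return False
            raise

    if __name__ == "__main__":
        print(TAG, "exists" if tag_exists(REGISTRY, REPO, TAG) else "not found")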
Dec 12 17:41:35.860736 containerd[1582]: time="2025-12-12T17:41:35.860693427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-pcmpc,Uid:59335d98-4d93-4df2-bc4b-c4c82b6bcd24,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:35.862865 containerd[1582]: time="2025-12-12T17:41:35.862836918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f8b9c58f-6dld8,Uid:8b64634c-702a-4400-b949-31628cf96118,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:35.989619 systemd-networkd[1490]: cali953388089dc: Link UP Dec 12 17:41:35.990885 systemd-networkd[1490]: cali953388089dc: Gained carrier Dec 12 17:41:36.006765 containerd[1582]: 2025-12-12 17:41:35.909 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0 calico-kube-controllers-64f8b9c58f- calico-system 8b64634c-702a-4400-b949-31628cf96118 844 0 2025-12-12 17:41:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64f8b9c58f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64f8b9c58f-6dld8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali953388089dc [] [] }} ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-" Dec 12 17:41:36.006765 containerd[1582]: 2025-12-12 17:41:35.909 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.006765 containerd[1582]: 2025-12-12 17:41:35.944 [INFO][4228] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" HandleID="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Workload="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.944 [INFO][4228] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" HandleID="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Workload="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64f8b9c58f-6dld8", "timestamp":"2025-12-12 17:41:35.944298946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.944 [INFO][4228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.944 [INFO][4228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.944 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.955 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" host="localhost" Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.960 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.965 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.966 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.969 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:36.007064 containerd[1582]: 2025-12-12 17:41:35.969 [INFO][4228] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" host="localhost" Dec 12 17:41:36.007299 containerd[1582]: 2025-12-12 17:41:35.970 [INFO][4228] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149 Dec 12 17:41:36.007299 containerd[1582]: 2025-12-12 17:41:35.974 [INFO][4228] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" host="localhost" Dec 12 17:41:36.007299 containerd[1582]: 2025-12-12 17:41:35.980 [INFO][4228] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" host="localhost" Dec 12 17:41:36.007299 containerd[1582]: 2025-12-12 17:41:35.980 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" host="localhost" Dec 12 17:41:36.007299 containerd[1582]: 2025-12-12 17:41:35.980 [INFO][4228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
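The IPAM trace above claims 192.168.88.130 from the host-affine block 192.168.88.128/26 before handing the address to the calico-kube-controllers endpoint. Block membership and capacity are easy to sanity-check with the Python standard library:

    # ipam_block_check.py - confirm that the address handed out by Calico IPAM
    # falls inside the /26 block the log says it was claimed from.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = ipaddress.ip_address("192.168.88.130")

    print(assigned in block)       # True: the address is inside the block
    print(block.num_addresses)     # 64 addresses per /26 block
    print(block[0], block[-1])     # 192.168.88.128 .. 192.168.88.191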
Dec 12 17:41:36.007299 containerd[1582]: 2025-12-12 17:41:35.980 [INFO][4228] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" HandleID="k8s-pod-network.2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Workload="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.007424 containerd[1582]: 2025-12-12 17:41:35.985 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0", GenerateName:"calico-kube-controllers-64f8b9c58f-", Namespace:"calico-system", SelfLink:"", UID:"8b64634c-702a-4400-b949-31628cf96118", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f8b9c58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64f8b9c58f-6dld8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali953388089dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:36.007475 containerd[1582]: 2025-12-12 17:41:35.985 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.007475 containerd[1582]: 2025-12-12 17:41:35.985 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali953388089dc ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.007475 containerd[1582]: 2025-12-12 17:41:35.990 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.007538 containerd[1582]: 2025-12-12 17:41:35.991 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0", GenerateName:"calico-kube-controllers-64f8b9c58f-", Namespace:"calico-system", SelfLink:"", UID:"8b64634c-702a-4400-b949-31628cf96118", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f8b9c58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149", Pod:"calico-kube-controllers-64f8b9c58f-6dld8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali953388089dc", MAC:"f6:e0:39:37:c0:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:36.007599 containerd[1582]: 2025-12-12 17:41:36.002 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" Namespace="calico-system" Pod="calico-kube-controllers-64f8b9c58f-6dld8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f8b9c58f--6dld8-eth0" Dec 12 17:41:36.021761 kernel: kauditd_printk_skb: 242 callbacks suppressed Dec 12 17:41:36.021870 kernel: audit: type=1325 audit(1765561296.019:649): table=filter:123 family=2 entries=36 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:36.019000 audit[4254]: NETFILTER_CFG table=filter:123 family=2 entries=36 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:36.019000 audit[4254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff79b39d0 a2=0 a3=ffff84f43fa8 items=0 ppid=3955 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.028062 kernel: audit: type=1300 audit(1765561296.019:649): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff79b39d0 a2=0 a3=ffff84f43fa8 items=0 ppid=3955 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.019000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:36.028797 containerd[1582]: time="2025-12-12T17:41:36.028760682Z" level=info msg="connecting to shim 2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149" address="unix:///run/containerd/s/9508c717d39779c993a26d161660e93dc82dc71635f9fb7824393aada9000927" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:36.031057 kernel: audit: type=1327 audit(1765561296.019:649): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:36.057088 systemd[1]: Started cri-containerd-2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149.scope - libcontainer container 2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149. Dec 12 17:41:36.068000 audit: BPF prog-id=209 op=LOAD Dec 12 17:41:36.070825 kernel: audit: type=1334 audit(1765561296.068:650): prog-id=209 op=LOAD Dec 12 17:41:36.070000 audit: BPF prog-id=210 op=LOAD Dec 12 17:41:36.070000 audit[4273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.076933 kernel: audit: type=1334 audit(1765561296.070:651): prog-id=210 op=LOAD Dec 12 17:41:36.077046 kernel: audit: type=1300 audit(1765561296.070:651): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.077056 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.081178 kernel: audit: type=1327 audit(1765561296.070:651): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.070000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:41:36.082136 kernel: audit: type=1334 audit(1765561296.070:652): prog-id=210 op=UNLOAD Dec 12 17:41:36.082353 kernel: audit: type=1300 audit(1765561296.070:652): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.070000 audit[4273]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.085789 kernel: audit: type=1327 
audit(1765561296.070:652): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.070000 audit: BPF prog-id=211 op=LOAD Dec 12 17:41:36.070000 audit[4273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.070000 audit: BPF prog-id=212 op=LOAD Dec 12 17:41:36.070000 audit[4273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.070000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:41:36.070000 audit[4273]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.071000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:41:36.071000 audit[4273]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.071000 audit: BPF prog-id=213 op=LOAD Dec 12 17:41:36.071000 audit[4273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4262 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264663238376333656238333338376366316138346439663035326137 Dec 12 17:41:36.109549 systemd-networkd[1490]: cali65b1fe8ed83: Link UP Dec 12 17:41:36.110664 systemd-networkd[1490]: cali65b1fe8ed83: Gained carrier Dec 12 17:41:36.123641 containerd[1582]: time="2025-12-12T17:41:36.123604657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f8b9c58f-6dld8,Uid:8b64634c-702a-4400-b949-31628cf96118,Namespace:calico-system,Attempt:0,} returns sandbox id \"2df287c3eb83387cf1a84d9f052a7287c0c1340989188cca5b418ebfeda1a149\"" Dec 12 17:41:36.127521 containerd[1582]: time="2025-12-12T17:41:36.126928877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:41:36.132506 containerd[1582]: 2025-12-12 17:41:35.916 [INFO][4199] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0 calico-apiserver-8ff6bd4cf- calico-apiserver 59335d98-4d93-4df2-bc4b-c4c82b6bcd24 848 0 2025-12-12 17:41:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8ff6bd4cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8ff6bd4cf-pcmpc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65b1fe8ed83 [] [] }} ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-" Dec 12 17:41:36.132506 containerd[1582]: 2025-12-12 17:41:35.916 [INFO][4199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.132506 containerd[1582]: 2025-12-12 17:41:35.949 [INFO][4234] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" HandleID="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Workload="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:35.950 [INFO][4234] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" HandleID="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Workload="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8ff6bd4cf-pcmpc", "timestamp":"2025-12-12 17:41:35.949983476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:35.950 [INFO][4234] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:35.980 [INFO][4234] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:35.980 [INFO][4234] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:36.056 [INFO][4234] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" host="localhost" Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:36.063 [INFO][4234] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:36.068 [INFO][4234] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:36.077 [INFO][4234] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:36.091 [INFO][4234] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:36.132703 containerd[1582]: 2025-12-12 17:41:36.091 [INFO][4234] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" host="localhost" Dec 12 17:41:36.132985 containerd[1582]: 2025-12-12 17:41:36.093 [INFO][4234] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7 Dec 12 17:41:36.132985 containerd[1582]: 2025-12-12 17:41:36.097 [INFO][4234] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" host="localhost" Dec 12 17:41:36.132985 containerd[1582]: 2025-12-12 17:41:36.103 [INFO][4234] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" host="localhost" Dec 12 17:41:36.132985 containerd[1582]: 2025-12-12 17:41:36.103 [INFO][4234] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" host="localhost" Dec 12 17:41:36.132985 containerd[1582]: 2025-12-12 17:41:36.103 [INFO][4234] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
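The SYSCALL records emitted around the runc and bpftool activity above all carry arch=c00000b7, which decodes to 64-bit little-endian AArch64, and identify syscalls by number. For the handful of numbers that appear in this log, the arm64 generic syscall table gives the names below (a convenience subset, not an exhaustive mapping):

    # syscall_names.py - name the arm64 syscall numbers seen in the audit records above.
    AARCH64_SYSCALLS = {
        35: "unlinkat",
        57: "close",     # BPF prog-id ... op=UNLOAD records pair with close() of the prog fd
        64: "write",
        211: "sendmsg",  # NETFILTER_CFG records: the nft batch is sent over netlink
        280: "bpf",      # BPF prog-id ... op=LOAD records
    }

    def name(nr: int) -> str:
        return AARCH64_SYSCALLS.get(nr, f"syscall_{nr}")

    if __name__ == "__main__":
        for nr in (280, 57, 211):
            print(nr, "->", name(nr))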
Dec 12 17:41:36.132985 containerd[1582]: 2025-12-12 17:41:36.103 [INFO][4234] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" HandleID="k8s-pod-network.8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Workload="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.133099 containerd[1582]: 2025-12-12 17:41:36.106 [INFO][4199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0", GenerateName:"calico-apiserver-8ff6bd4cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"59335d98-4d93-4df2-bc4b-c4c82b6bcd24", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8ff6bd4cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8ff6bd4cf-pcmpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b1fe8ed83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:36.133203 containerd[1582]: 2025-12-12 17:41:36.106 [INFO][4199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.133203 containerd[1582]: 2025-12-12 17:41:36.106 [INFO][4199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65b1fe8ed83 ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.133203 containerd[1582]: 2025-12-12 17:41:36.111 [INFO][4199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.133269 containerd[1582]: 2025-12-12 17:41:36.113 [INFO][4199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0", GenerateName:"calico-apiserver-8ff6bd4cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"59335d98-4d93-4df2-bc4b-c4c82b6bcd24", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8ff6bd4cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7", Pod:"calico-apiserver-8ff6bd4cf-pcmpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b1fe8ed83", MAC:"8a:85:46:1e:57:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:36.133329 containerd[1582]: 2025-12-12 17:41:36.128 [INFO][4199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-pcmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--pcmpc-eth0" Dec 12 17:41:36.142000 audit[4306]: NETFILTER_CFG table=filter:124 family=2 entries=54 op=nft_register_chain pid=4306 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:36.142000 audit[4306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffddc9d9c0 a2=0 a3=ffffae5fefa8 items=0 ppid=3955 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.142000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:36.154505 containerd[1582]: time="2025-12-12T17:41:36.154463334Z" level=info msg="connecting to shim 8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7" address="unix:///run/containerd/s/5f81500dddacb34c65ff243006a7a1efc35748df1692ba65e9f8f73cd6e965cb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:36.183052 systemd[1]: Started cri-containerd-8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7.scope - libcontainer container 8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7. 
Dec 12 17:41:36.193000 audit: BPF prog-id=214 op=LOAD Dec 12 17:41:36.193000 audit: BPF prog-id=215 op=LOAD Dec 12 17:41:36.193000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.193000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:41:36.193000 audit[4327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.193000 audit: BPF prog-id=216 op=LOAD Dec 12 17:41:36.193000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.193000 audit: BPF prog-id=217 op=LOAD Dec 12 17:41:36.193000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.193000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:41:36.193000 audit[4327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.193000 audit: BPF prog-id=216 op=UNLOAD Dec 12 17:41:36.193000 audit[4327]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.193000 audit: BPF prog-id=218 op=LOAD Dec 12 17:41:36.193000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4315 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:36.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863303664356563656433333134643235313939636637363139313530 Dec 12 17:41:36.195297 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:36.222711 containerd[1582]: time="2025-12-12T17:41:36.222593246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-pcmpc,Uid:59335d98-4d93-4df2-bc4b-c4c82b6bcd24,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8c06d5eced3314d25199cf761915046b6423429cf37f26c763ac1a19593185f7\"" Dec 12 17:41:36.346585 containerd[1582]: time="2025-12-12T17:41:36.346488761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:36.347764 containerd[1582]: time="2025-12-12T17:41:36.347724352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:41:36.347867 containerd[1582]: time="2025-12-12T17:41:36.347811888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:36.348006 kubelet[2737]: E1212 17:41:36.347972 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:41:36.348321 kubelet[2737]: E1212 17:41:36.348015 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:41:36.348321 kubelet[2737]: E1212 17:41:36.348195 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-64f8b9c58f-6dld8_calico-system(8b64634c-702a-4400-b949-31628cf96118): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:36.348321 kubelet[2737]: E1212 17:41:36.348245 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" podUID="8b64634c-702a-4400-b949-31628cf96118" Dec 12 17:41:36.348532 containerd[1582]: time="2025-12-12T17:41:36.348407679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:41:36.561683 containerd[1582]: time="2025-12-12T17:41:36.561580052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:36.562784 containerd[1582]: time="2025-12-12T17:41:36.562741268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:41:36.562892 containerd[1582]: time="2025-12-12T17:41:36.562853609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:36.563147 kubelet[2737]: E1212 17:41:36.563103 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:36.563317 kubelet[2737]: E1212 17:41:36.563257 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:36.563564 kubelet[2737]: E1212 17:41:36.563435 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8ff6bd4cf-pcmpc_calico-apiserver(59335d98-4d93-4df2-bc4b-c4c82b6bcd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:36.563682 kubelet[2737]: E1212 17:41:36.563516 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" podUID="59335d98-4d93-4df2-bc4b-c4c82b6bcd24" Dec 12 17:41:37.008385 kubelet[2737]: E1212 17:41:37.008336 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" podUID="59335d98-4d93-4df2-bc4b-c4c82b6bcd24" Dec 12 17:41:37.010467 kubelet[2737]: E1212 17:41:37.010390 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" podUID="8b64634c-702a-4400-b949-31628cf96118" Dec 12 17:41:37.030000 audit[4359]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:37.030000 audit[4359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe910db10 a2=0 a3=1 items=0 ppid=2891 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:37.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:37.034000 audit[4359]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:37.034000 audit[4359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe910db10 a2=0 a3=1 items=0 ppid=2891 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:37.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:37.267008 systemd-networkd[1490]: cali953388089dc: Gained IPv6LL Dec 12 17:41:37.778984 systemd-networkd[1490]: cali65b1fe8ed83: Gained IPv6LL Dec 12 17:41:37.842118 containerd[1582]: time="2025-12-12T17:41:37.842075650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-74cxw,Uid:6276624d-bee7-4066-84ff-d2e0529aa160,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:37.843525 kubelet[2737]: E1212 17:41:37.843499 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:37.843927 containerd[1582]: time="2025-12-12T17:41:37.843897541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-79smq,Uid:117d04d2-944b-4a0a-a3e5-7cfa981b4f19,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:37.845520 containerd[1582]: time="2025-12-12T17:41:37.845124724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d47949b7b-87fxs,Uid:0d34868c-1018-4669-b30b-dbcccb35f648,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:37.979431 systemd-networkd[1490]: califd98dc1f31d: Link UP Dec 12 17:41:37.980377 systemd-networkd[1490]: califd98dc1f31d: Gained carrier Dec 12 17:41:37.993393 containerd[1582]: 2025-12-12 
17:41:37.889 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--79smq-eth0 coredns-66bc5c9577- kube-system 117d04d2-944b-4a0a-a3e5-7cfa981b4f19 849 0 2025-12-12 17:41:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-79smq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd98dc1f31d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-" Dec 12 17:41:37.993393 containerd[1582]: 2025-12-12 17:41:37.889 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:37.993393 containerd[1582]: 2025-12-12 17:41:37.933 [INFO][4404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" HandleID="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Workload="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.933 [INFO][4404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" HandleID="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Workload="localhost-k8s-coredns--66bc5c9577--79smq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-79smq", "timestamp":"2025-12-12 17:41:37.933532542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.934 [INFO][4404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.934 [INFO][4404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
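The kubelet "Nameserver limits exceeded" message above is the standard warning that the node's resolv.conf lists more nameservers than the glibc stub resolver supports, so only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are passed through to pods. A small sketch to spot the condition on a node, assuming the usual /etc/resolv.conf location:

    # resolv_check.py - count nameserver entries and flag more than the three
    # that the glibc resolver (and therefore pod DNS) will actually use.
    MAXNS = 3  # glibc limit on nameserver entries

    def nameservers(path: str = "/etc/resolv.conf") -> list[str]:
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        return servers

    if __name__ == "__main__":
        ns = nameservers()
        if len(ns) > MAXNS:
            print(f"{len(ns)} nameservers configured; only the first {MAXNS} are used: {ns[:MAXNS]}")
        else:
            print("nameservers:", ns)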
Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.934 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.944 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" host="localhost" Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.948 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.955 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.957 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.960 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:37.993627 containerd[1582]: 2025-12-12 17:41:37.960 [INFO][4404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" host="localhost" Dec 12 17:41:37.994734 containerd[1582]: 2025-12-12 17:41:37.961 [INFO][4404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84 Dec 12 17:41:37.994734 containerd[1582]: 2025-12-12 17:41:37.965 [INFO][4404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" host="localhost" Dec 12 17:41:37.994734 containerd[1582]: 2025-12-12 17:41:37.970 [INFO][4404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" host="localhost" Dec 12 17:41:37.994734 containerd[1582]: 2025-12-12 17:41:37.970 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" host="localhost" Dec 12 17:41:37.994734 containerd[1582]: 2025-12-12 17:41:37.970 [INFO][4404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:41:37.994734 containerd[1582]: 2025-12-12 17:41:37.970 [INFO][4404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" HandleID="k8s-pod-network.c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Workload="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:37.995053 containerd[1582]: 2025-12-12 17:41:37.974 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--79smq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"117d04d2-944b-4a0a-a3e5-7cfa981b4f19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-79smq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd98dc1f31d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:37.995053 containerd[1582]: 2025-12-12 17:41:37.977 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:37.995053 containerd[1582]: 2025-12-12 17:41:37.977 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd98dc1f31d ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:37.995053 containerd[1582]: 2025-12-12 17:41:37.980 
[INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:37.995053 containerd[1582]: 2025-12-12 17:41:37.981 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--79smq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"117d04d2-944b-4a0a-a3e5-7cfa981b4f19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84", Pod:"coredns-66bc5c9577-79smq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd98dc1f31d", MAC:"8e:45:4a:ef:51:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:37.995053 containerd[1582]: 2025-12-12 17:41:37.991 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" Namespace="kube-system" Pod="coredns-66bc5c9577-79smq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--79smq-eth0" Dec 12 17:41:38.008000 audit[4436]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4436 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:38.008000 audit[4436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffc96fa330 a2=0 a3=ffff9054ffa8 items=0 ppid=3955 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:38.014569 kubelet[2737]: E1212 17:41:38.014523 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" podUID="8b64634c-702a-4400-b949-31628cf96118" Dec 12 17:41:38.018063 kubelet[2737]: E1212 17:41:38.017663 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" podUID="59335d98-4d93-4df2-bc4b-c4c82b6bcd24" Dec 12 17:41:38.036524 containerd[1582]: time="2025-12-12T17:41:38.036388580Z" level=info msg="connecting to shim c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84" address="unix:///run/containerd/s/fca0d6d138cfba585c98ad9990a50f78594ed87697b53dd1744b51218cffb8d5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:38.068249 systemd[1]: Started cri-containerd-c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84.scope - libcontainer container c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84. 
Dec 12 17:41:38.084000 audit: BPF prog-id=219 op=LOAD Dec 12 17:41:38.084000 audit: BPF prog-id=220 op=LOAD Dec 12 17:41:38.084000 audit[4457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.084000 audit: BPF prog-id=220 op=UNLOAD Dec 12 17:41:38.084000 audit[4457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.084000 audit: BPF prog-id=221 op=LOAD Dec 12 17:41:38.084000 audit[4457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.084000 audit: BPF prog-id=222 op=LOAD Dec 12 17:41:38.084000 audit[4457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.084000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:41:38.084000 audit[4457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.084000 audit: BPF prog-id=221 op=UNLOAD Dec 12 17:41:38.084000 audit[4457]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.084000 audit: BPF prog-id=223 op=LOAD Dec 12 17:41:38.084000 audit[4457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4444 pid=4457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330346535303361336632616636396639623337373932303935653865 Dec 12 17:41:38.088166 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:38.093768 systemd-networkd[1490]: calibaecfe24a41: Link UP Dec 12 17:41:38.095312 systemd-networkd[1490]: calibaecfe24a41: Gained carrier Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.902 [INFO][4366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0 calico-apiserver-8ff6bd4cf- calico-apiserver 6276624d-bee7-4066-84ff-d2e0529aa160 854 0 2025-12-12 17:41:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8ff6bd4cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8ff6bd4cf-74cxw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibaecfe24a41 [] [] }} ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.902 [INFO][4366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.936 [INFO][4405] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" HandleID="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Workload="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.936 [INFO][4405] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" 
HandleID="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Workload="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8ff6bd4cf-74cxw", "timestamp":"2025-12-12 17:41:37.936759248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.936 [INFO][4405] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.970 [INFO][4405] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:37.971 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.048 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.054 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.061 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.064 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.067 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.067 [INFO][4405] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.069 [INFO][4405] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493 Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.075 [INFO][4405] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.082 [INFO][4405] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.083 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" host="localhost" Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.083 [INFO][4405] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:41:38.122420 containerd[1582]: 2025-12-12 17:41:38.083 [INFO][4405] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" HandleID="k8s-pod-network.63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Workload="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.122958 containerd[1582]: 2025-12-12 17:41:38.088 [INFO][4366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0", GenerateName:"calico-apiserver-8ff6bd4cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6276624d-bee7-4066-84ff-d2e0529aa160", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8ff6bd4cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8ff6bd4cf-74cxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibaecfe24a41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:38.122958 containerd[1582]: 2025-12-12 17:41:38.088 [INFO][4366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.122958 containerd[1582]: 2025-12-12 17:41:38.089 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibaecfe24a41 ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.122958 containerd[1582]: 2025-12-12 17:41:38.096 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.122958 containerd[1582]: 2025-12-12 17:41:38.100 [INFO][4366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0", GenerateName:"calico-apiserver-8ff6bd4cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6276624d-bee7-4066-84ff-d2e0529aa160", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8ff6bd4cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493", Pod:"calico-apiserver-8ff6bd4cf-74cxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibaecfe24a41", MAC:"ee:3d:8a:b2:c4:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:38.122958 containerd[1582]: 2025-12-12 17:41:38.115 [INFO][4366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" Namespace="calico-apiserver" Pod="calico-apiserver-8ff6bd4cf-74cxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8ff6bd4cf--74cxw-eth0" Dec 12 17:41:38.126399 containerd[1582]: time="2025-12-12T17:41:38.126360986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-79smq,Uid:117d04d2-944b-4a0a-a3e5-7cfa981b4f19,Namespace:kube-system,Attempt:0,} returns sandbox id \"c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84\"" Dec 12 17:41:38.127390 kubelet[2737]: E1212 17:41:38.127359 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:38.134164 containerd[1582]: time="2025-12-12T17:41:38.134124120Z" level=info msg="CreateContainer within sandbox \"c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:41:38.171000 audit[4492]: NETFILTER_CFG table=filter:128 family=2 entries=49 op=nft_register_chain pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:38.171000 audit[4492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffdf4af480 a2=0 a3=ffff92c01fa8 items=0 ppid=3955 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.171000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:38.179123 containerd[1582]: time="2025-12-12T17:41:38.179085078Z" level=info msg="Container 1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:38.213989 kubelet[2737]: I1212 17:41:38.213948 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:41:38.216831 kubelet[2737]: E1212 17:41:38.216741 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:38.225483 systemd-networkd[1490]: calie88c67252d2: Link UP Dec 12 17:41:38.225949 containerd[1582]: time="2025-12-12T17:41:38.225906006Z" level=info msg="connecting to shim 63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493" address="unix:///run/containerd/s/2f0edfe2945b24cfd3d7c77c3a680e099a41c50995ff6c4f6125c6ef723a858c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:38.226088 systemd-networkd[1490]: calie88c67252d2: Gained carrier Dec 12 17:41:38.250182 systemd[1]: Started cri-containerd-63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493.scope - libcontainer container 63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493. Dec 12 17:41:38.257757 containerd[1582]: time="2025-12-12T17:41:38.257460111Z" level=info msg="CreateContainer within sandbox \"c04e503a3f2af69f9b37792095e8ecb7beec1ed709f8ef0bc4d9048bea8c4a84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b\"" Dec 12 17:41:38.259086 containerd[1582]: time="2025-12-12T17:41:38.258371032Z" level=info msg="StartContainer for \"1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b\"" Dec 12 17:41:38.261916 containerd[1582]: time="2025-12-12T17:41:38.261511908Z" level=info msg="connecting to shim 1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b" address="unix:///run/containerd/s/fca0d6d138cfba585c98ad9990a50f78594ed87697b53dd1744b51218cffb8d5" protocol=ttrpc version=3 Dec 12 17:41:38.275000 audit: BPF prog-id=224 op=LOAD Dec 12 17:41:38.276000 audit: BPF prog-id=225 op=LOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.276000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.276000 audit: BPF prog-id=226 op=LOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.276000 audit: BPF prog-id=227 op=LOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.276000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.276000 audit: BPF prog-id=226 op=UNLOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.276000 audit: BPF prog-id=228 op=LOAD Dec 12 17:41:38.276000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4501 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.276000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633363338373936353432613364633937666361333331333935356532 Dec 12 17:41:38.277000 audit[4554]: NETFILTER_CFG table=filter:129 family=2 entries=53 op=nft_register_chain pid=4554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:38.277000 audit[4554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26640 a0=3 a1=fffff085f390 a2=0 a3=ffff947aefa8 items=0 ppid=3955 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:38.278430 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:38.290990 systemd[1]: Started cri-containerd-1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b.scope - libcontainer container 1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b. Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:37.905 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0 calico-apiserver-6d47949b7b- calico-apiserver 0d34868c-1018-4669-b30b-dbcccb35f648 851 0 2025-12-12 17:41:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d47949b7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d47949b7b-87fxs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie88c67252d2 [] [] }} ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:37.905 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:37.938 [INFO][4415] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" HandleID="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Workload="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:37.938 [INFO][4415] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" HandleID="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Workload="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3480), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d47949b7b-87fxs", "timestamp":"2025-12-12 17:41:37.9381449 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:37.938 [INFO][4415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.085 [INFO][4415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.085 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.146 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.163 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.170 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.172 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.175 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.175 [INFO][4415] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.181 [INFO][4415] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.199 [INFO][4415] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.213 [INFO][4415] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.213 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" host="localhost" Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.213 [INFO][4415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:41:38.291516 containerd[1582]: 2025-12-12 17:41:38.213 [INFO][4415] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" HandleID="k8s-pod-network.9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Workload="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.292022 containerd[1582]: 2025-12-12 17:41:38.220 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0", GenerateName:"calico-apiserver-6d47949b7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d34868c-1018-4669-b30b-dbcccb35f648", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d47949b7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d47949b7b-87fxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie88c67252d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:38.292022 containerd[1582]: 2025-12-12 17:41:38.222 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.292022 containerd[1582]: 2025-12-12 17:41:38.222 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie88c67252d2 ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.292022 containerd[1582]: 2025-12-12 17:41:38.226 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.292022 containerd[1582]: 2025-12-12 17:41:38.226 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0", GenerateName:"calico-apiserver-6d47949b7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d34868c-1018-4669-b30b-dbcccb35f648", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d47949b7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d", Pod:"calico-apiserver-6d47949b7b-87fxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie88c67252d2", MAC:"7a:60:fe:f6:a5:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:38.292022 containerd[1582]: 2025-12-12 17:41:38.255 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" Namespace="calico-apiserver" Pod="calico-apiserver-6d47949b7b-87fxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d47949b7b--87fxs-eth0" Dec 12 17:41:38.316000 audit: BPF prog-id=229 op=LOAD Dec 12 17:41:38.316000 audit: BPF prog-id=230 op=LOAD Dec 12 17:41:38.316000 audit[4542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.316000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:41:38.316000 audit[4542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.316000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.317000 audit: BPF prog-id=231 op=LOAD Dec 12 17:41:38.317000 audit[4542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.317000 audit: BPF prog-id=232 op=LOAD Dec 12 17:41:38.317000 audit[4542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.317000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:41:38.317000 audit[4542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.317000 audit: BPF prog-id=231 op=UNLOAD Dec 12 17:41:38.317000 audit[4542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.317000 audit: BPF prog-id=233 op=LOAD Dec 12 17:41:38.317000 audit[4542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4444 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.317000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333665383832613166643534653863323333636137353633306264 Dec 12 17:41:38.321674 containerd[1582]: time="2025-12-12T17:41:38.321622908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8ff6bd4cf-74cxw,Uid:6276624d-bee7-4066-84ff-d2e0529aa160,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"63638796542a3dc97fca3313955e2fccdf7a98a7dea8f417b78b579bb1b07493\"" Dec 12 17:41:38.327480 containerd[1582]: time="2025-12-12T17:41:38.327377167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:41:38.357526 containerd[1582]: time="2025-12-12T17:41:38.357352433Z" level=info msg="connecting to shim 9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d" address="unix:///run/containerd/s/e9c4866991b931457a2ff94d6637c632c0d8e3338df76e39007f225b6cbd57ca" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:38.358005 containerd[1582]: time="2025-12-12T17:41:38.357974703Z" level=info msg="StartContainer for \"1636e882a1fd54e8c233ca75630bda4e10d568f32ccfd4b46d7ae7bd0cd3fe4b\" returns successfully" Dec 12 17:41:38.386036 systemd[1]: Started cri-containerd-9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d.scope - libcontainer container 9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d. Dec 12 17:41:38.405965 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:52994.service - OpenSSH per-connection server daemon (10.0.0.1:52994). Dec 12 17:41:38.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.131:22-10.0.0.1:52994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:41:38.406000 audit: BPF prog-id=234 op=LOAD Dec 12 17:41:38.407000 audit: BPF prog-id=235 op=LOAD Dec 12 17:41:38.407000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.407000 audit: BPF prog-id=235 op=UNLOAD Dec 12 17:41:38.407000 audit[4618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.407000 audit: BPF prog-id=236 op=LOAD Dec 12 17:41:38.407000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.407000 audit: BPF prog-id=237 op=LOAD Dec 12 17:41:38.407000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.407000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:41:38.407000 audit[4618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.407000 audit: BPF prog-id=236 op=UNLOAD Dec 12 17:41:38.407000 
audit[4618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.407000 audit: BPF prog-id=238 op=LOAD Dec 12 17:41:38.407000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4607 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963366262306662376132636634313838643137333565636139386533 Dec 12 17:41:38.412848 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:38.497015 containerd[1582]: time="2025-12-12T17:41:38.496952983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d47949b7b-87fxs,Uid:0d34868c-1018-4669-b30b-dbcccb35f648,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c6bb0fb7a2cf4188d1735eca98e3b0387fc69cf781a65b95ba29b9815b7e64d\"" Dec 12 17:41:38.500000 audit[4643]: USER_ACCT pid=4643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:38.502705 sshd[4643]: Accepted publickey for core from 10.0.0.1 port 52994 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:38.502000 audit[4643]: CRED_ACQ pid=4643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:38.502000 audit[4643]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeeab57e0 a2=3 a3=0 items=0 ppid=1 pid=4643 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.502000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:38.504219 sshd-session[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:38.513686 systemd-logind[1556]: New session 9 of user core. Dec 12 17:41:38.524105 systemd[1]: Started session-9.scope - Session 9 of User core. 
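Note on the audit PROCTITLE fields above: they are the process command line hex-encoded, with arguments separated by NUL bytes. A minimal decoding sketch in Python, applied only to a prefix of one logged value (the journal truncates the trailing task IDs, so the full IDs are not reproduced here):

    # Decode an audit PROCTITLE field: hex-encoded argv, arguments separated by NUL bytes.
    def decode_proctitle(hex_value: str) -> list:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Prefix of one of the values logged above (the trailing container/task ID is truncated in the journal).
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))
    # -> ['runc', '--root', '/run/containerd/runc/k8s.io']

The remainder of each value decodes to "--log" followed by a path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/ plus the (truncated) task ID, i.e. the runc invocations behind the BPF LOAD/UNLOAD events above.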
Dec 12 17:41:38.525000 audit[4643]: USER_START pid=4643 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:38.527000 audit[4679]: CRED_ACQ pid=4679 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:38.568729 containerd[1582]: time="2025-12-12T17:41:38.568593824Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:38.580978 containerd[1582]: time="2025-12-12T17:41:38.580911564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:41:38.581095 containerd[1582]: time="2025-12-12T17:41:38.580959773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:38.581393 kubelet[2737]: E1212 17:41:38.581126 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:38.581393 kubelet[2737]: E1212 17:41:38.581166 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:38.581495 kubelet[2737]: E1212 17:41:38.581450 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8ff6bd4cf-74cxw_calico-apiserver(6276624d-bee7-4066-84ff-d2e0529aa160): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:38.581528 containerd[1582]: time="2025-12-12T17:41:38.581408492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:41:38.582103 kubelet[2737]: E1212 17:41:38.582054 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" podUID="6276624d-bee7-4066-84ff-d2e0529aa160" Dec 12 17:41:38.685396 sshd[4679]: Connection closed by 10.0.0.1 port 52994 Dec 12 17:41:38.686031 sshd-session[4643]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:38.686000 audit[4643]: USER_END pid=4643 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:38.686000 audit[4643]: CRED_DISP pid=4643 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:38.690296 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:52994.service: Deactivated successfully. Dec 12 17:41:38.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.131:22-10.0.0.1:52994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:38.693278 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:41:38.694233 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:41:38.695165 systemd-logind[1556]: Removed session 9. Dec 12 17:41:38.804071 containerd[1582]: time="2025-12-12T17:41:38.803969888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:38.809298 containerd[1582]: time="2025-12-12T17:41:38.809251422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:41:38.809385 containerd[1582]: time="2025-12-12T17:41:38.809340358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:38.809682 kubelet[2737]: E1212 17:41:38.809518 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:38.809682 kubelet[2737]: E1212 17:41:38.809673 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:38.809783 kubelet[2737]: E1212 17:41:38.809772 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d47949b7b-87fxs_calico-apiserver(0d34868c-1018-4669-b30b-dbcccb35f648): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:38.810017 kubelet[2737]: E1212 17:41:38.809917 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" podUID="0d34868c-1018-4669-b30b-dbcccb35f648" Dec 12 17:41:38.842478 containerd[1582]: time="2025-12-12T17:41:38.842371645Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpm8r,Uid:e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:38.945788 systemd-networkd[1490]: calie973f66e3bb: Link UP Dec 12 17:41:38.947044 systemd-networkd[1490]: calie973f66e3bb: Gained carrier Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.881 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dpm8r-eth0 csi-node-driver- calico-system e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb 736 0 2025-12-12 17:41:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dpm8r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie973f66e3bb [] [] }} ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.882 [INFO][4698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.906 [INFO][4712] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" HandleID="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Workload="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.906 [INFO][4712] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" HandleID="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Workload="localhost-k8s-csi--node--driver--dpm8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dpm8r", "timestamp":"2025-12-12 17:41:38.906442626 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.906 [INFO][4712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.906 [INFO][4712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.906 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.916 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.922 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.926 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.927 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.930 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.930 [INFO][4712] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.931 [INFO][4712] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84 Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.934 [INFO][4712] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.940 [INFO][4712] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.940 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" host="localhost" Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.940 [INFO][4712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
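The IPAM entries above claim 192.168.88.135 for csi-node-driver-dpm8r out of the node's affine block 192.168.88.128/26. A quick sanity-check sketch with Python's ipaddress module; the block and address come from the log, and nothing beyond containment is implied about Calico's allocator:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # affinity block logged above
    assigned = ipaddress.ip_address("192.168.88.135")   # address claimed for csi-node-driver-dpm8r

    print(assigned in block)     # True: the claimed address lies inside the affine block
    print(block.num_addresses)   # 64: a /26 spans 192.168.88.128-192.168.88.191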
Dec 12 17:41:38.963267 containerd[1582]: 2025-12-12 17:41:38.940 [INFO][4712] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" HandleID="k8s-pod-network.09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Workload="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.963885 containerd[1582]: 2025-12-12 17:41:38.943 [INFO][4698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpm8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dpm8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie973f66e3bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:38.963885 containerd[1582]: 2025-12-12 17:41:38.943 [INFO][4698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.963885 containerd[1582]: 2025-12-12 17:41:38.943 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie973f66e3bb ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.963885 containerd[1582]: 2025-12-12 17:41:38.947 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.963885 containerd[1582]: 2025-12-12 17:41:38.947 [INFO][4698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpm8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84", Pod:"csi-node-driver-dpm8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie973f66e3bb", MAC:"d2:b8:1d:d6:03:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:38.963885 containerd[1582]: 2025-12-12 17:41:38.958 [INFO][4698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" Namespace="calico-system" Pod="csi-node-driver-dpm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpm8r-eth0" Dec 12 17:41:38.971000 audit[4728]: NETFILTER_CFG table=filter:130 family=2 entries=62 op=nft_register_chain pid=4728 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:38.971000 audit[4728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28368 a0=3 a1=ffffd3c3f780 a2=0 a3=ffff99ae3fa8 items=0 ppid=3955 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:38.971000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:38.987488 containerd[1582]: time="2025-12-12T17:41:38.987165275Z" level=info msg="connecting to shim 09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84" address="unix:///run/containerd/s/96c4e2d4409bf02373050240b1ea7dc1d9d0defd4ef92e7d3080d0071d16ce1d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:39.010048 systemd[1]: Started cri-containerd-09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84.scope - libcontainer container 09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84. 
Dec 12 17:41:39.016069 kubelet[2737]: E1212 17:41:39.016032 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" podUID="0d34868c-1018-4669-b30b-dbcccb35f648" Dec 12 17:41:39.019335 kubelet[2737]: E1212 17:41:39.019253 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" podUID="6276624d-bee7-4066-84ff-d2e0529aa160" Dec 12 17:41:39.023645 kubelet[2737]: E1212 17:41:39.023620 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:39.024603 kubelet[2737]: E1212 17:41:39.023688 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:39.022000 audit: BPF prog-id=239 op=LOAD Dec 12 17:41:39.022000 audit: BPF prog-id=240 op=LOAD Dec 12 17:41:39.022000 audit[4749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.022000 audit: BPF prog-id=240 op=UNLOAD Dec 12 17:41:39.022000 audit[4749]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.022000 audit: BPF prog-id=241 op=LOAD Dec 12 17:41:39.022000 audit[4749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.022000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.022000 audit: BPF prog-id=242 op=LOAD Dec 12 17:41:39.022000 audit[4749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.022000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:41:39.022000 audit[4749]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.022000 audit: BPF prog-id=241 op=UNLOAD Dec 12 17:41:39.022000 audit[4749]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.023000 audit: BPF prog-id=243 op=LOAD Dec 12 17:41:39.023000 audit[4749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4738 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039666264653266363466393264623639313662653639323133326335 Dec 12 17:41:39.029274 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:39.051540 containerd[1582]: time="2025-12-12T17:41:39.051477598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpm8r,Uid:e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"09fbde2f64f92db6916be692132c554ecfa5665b11d9b09ca51e1420fe528d84\"" Dec 12 17:41:39.054711 containerd[1582]: time="2025-12-12T17:41:39.054567572Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:41:39.058000 audit[4776]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:39.058000 audit[4776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc8a51520 a2=0 a3=1 items=0 ppid=2891 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:39.064007 kubelet[2737]: I1212 17:41:39.063952 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-79smq" podStartSLOduration=39.063934989 podStartE2EDuration="39.063934989s" podCreationTimestamp="2025-12-12 17:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:39.062854283 +0000 UTC m=+44.331817415" watchObservedRunningTime="2025-12-12 17:41:39.063934989 +0000 UTC m=+44.332898041" Dec 12 17:41:39.065000 audit[4776]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:39.065000 audit[4776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc8a51520 a2=0 a3=1 items=0 ppid=2891 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:39.269006 containerd[1582]: time="2025-12-12T17:41:39.268928985Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:39.269965 containerd[1582]: time="2025-12-12T17:41:39.269854785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:41:39.269965 containerd[1582]: time="2025-12-12T17:41:39.269916716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:39.270154 kubelet[2737]: E1212 17:41:39.270099 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:41:39.270221 kubelet[2737]: E1212 17:41:39.270156 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:41:39.270587 kubelet[2737]: E1212 17:41:39.270245 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dpm8r_calico-system(e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:39.271172 containerd[1582]: time="2025-12-12T17:41:39.271134566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:41:39.465065 containerd[1582]: time="2025-12-12T17:41:39.465006242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:39.466009 containerd[1582]: time="2025-12-12T17:41:39.465965327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:41:39.466080 containerd[1582]: time="2025-12-12T17:41:39.466011575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:39.466242 kubelet[2737]: E1212 17:41:39.466192 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:41:39.466242 kubelet[2737]: E1212 17:41:39.466238 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:41:39.466334 kubelet[2737]: E1212 17:41:39.466310 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dpm8r_calico-system(e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:39.466413 kubelet[2737]: E1212 17:41:39.466353 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:39.571012 systemd-networkd[1490]: califd98dc1f31d: Gained IPv6LL Dec 12 17:41:39.698944 systemd-networkd[1490]: calie88c67252d2: Gained IPv6LL Dec 12 17:41:39.842708 containerd[1582]: time="2025-12-12T17:41:39.842611842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-glc24,Uid:42e96bc1-b1e8-4725-afa1-530d18ed87af,Namespace:calico-system,Attempt:0,}" Dec 12 
17:41:39.843661 kubelet[2737]: E1212 17:41:39.843474 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:39.844246 containerd[1582]: time="2025-12-12T17:41:39.843937591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7ln7x,Uid:9ff4b695-b070-4674-a703-cd00568559f5,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:39.958076 systemd-networkd[1490]: cali900719bb506: Link UP Dec 12 17:41:39.958953 systemd-networkd[1490]: cali900719bb506: Gained carrier Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.885 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--glc24-eth0 goldmane-7c778bb748- calico-system 42e96bc1-b1e8-4725-afa1-530d18ed87af 852 0 2025-12-12 17:41:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-glc24 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali900719bb506 [] [] }} ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.885 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.914 [INFO][4804] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" HandleID="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Workload="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.914 [INFO][4804] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" HandleID="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Workload="localhost-k8s-goldmane--7c778bb748--glc24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c35d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-glc24", "timestamp":"2025-12-12 17:41:39.914024253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.914 [INFO][4804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.914 [INFO][4804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.914 [INFO][4804] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.923 [INFO][4804] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.929 [INFO][4804] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.934 [INFO][4804] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.936 [INFO][4804] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.939 [INFO][4804] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.939 [INFO][4804] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.940 [INFO][4804] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172 Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.945 [INFO][4804] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.951 [INFO][4804] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.951 [INFO][4804] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" host="localhost" Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.951 [INFO][4804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:41:39.972585 containerd[1582]: 2025-12-12 17:41:39.951 [INFO][4804] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" HandleID="k8s-pod-network.232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Workload="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.973178 containerd[1582]: 2025-12-12 17:41:39.953 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--glc24-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"42e96bc1-b1e8-4725-afa1-530d18ed87af", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-glc24", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali900719bb506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:39.973178 containerd[1582]: 2025-12-12 17:41:39.954 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.973178 containerd[1582]: 2025-12-12 17:41:39.954 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali900719bb506 ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.973178 containerd[1582]: 2025-12-12 17:41:39.958 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.973178 containerd[1582]: 2025-12-12 17:41:39.960 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--glc24-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"42e96bc1-b1e8-4725-afa1-530d18ed87af", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172", Pod:"goldmane-7c778bb748-glc24", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali900719bb506", MAC:"de:3d:18:b4:53:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:39.973178 containerd[1582]: 2025-12-12 17:41:39.970 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" Namespace="calico-system" Pod="goldmane-7c778bb748-glc24" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--glc24-eth0" Dec 12 17:41:39.984000 audit[4829]: NETFILTER_CFG table=filter:133 family=2 entries=70 op=nft_register_chain pid=4829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:39.984000 audit[4829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=33956 a0=3 a1=fffff05c9100 a2=0 a3=ffffbcfe3fa8 items=0 ppid=3955 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:39.984000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:39.996908 containerd[1582]: time="2025-12-12T17:41:39.996867037Z" level=info msg="connecting to shim 232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172" address="unix:///run/containerd/s/08f07b55756ea0fcd8549d8071e2e4df4b63a2a237c4e29a06b17e6d015b36a5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:40.025265 systemd[1]: Started cri-containerd-232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172.scope - libcontainer container 232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172. 
Dec 12 17:41:40.029374 kubelet[2737]: E1212 17:41:40.029341 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:40.031432 kubelet[2737]: E1212 17:41:40.031340 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" podUID="0d34868c-1018-4669-b30b-dbcccb35f648" Dec 12 17:41:40.032046 kubelet[2737]: E1212 17:41:40.032011 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:40.051603 kubelet[2737]: E1212 17:41:40.050348 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" podUID="6276624d-bee7-4066-84ff-d2e0529aa160" Dec 12 17:41:40.061000 audit: BPF prog-id=244 op=LOAD Dec 12 17:41:40.064000 audit: BPF prog-id=245 op=LOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.064000 audit: BPF prog-id=245 op=UNLOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.064000 audit: BPF prog-id=246 op=LOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.064000 audit: BPF prog-id=247 op=LOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.064000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.064000 audit: BPF prog-id=246 op=UNLOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.064000 audit: BPF prog-id=248 op=LOAD Dec 12 17:41:40.064000 audit[4850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4838 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.064000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326131626332393964326362633966616265626261316465646230 Dec 12 17:41:40.067759 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:40.083917 systemd-networkd[1490]: calibaecfe24a41: Gained IPv6LL Dec 12 17:41:40.088916 systemd-networkd[1490]: calidc070f98955: Link UP Dec 12 17:41:40.089127 systemd-networkd[1490]: calidc070f98955: Gained carrier Dec 12 17:41:40.098000 audit[4873]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:40.098000 audit[4873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9bf20b0 a2=0 a3=1 items=0 ppid=2891 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.889 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--7ln7x-eth0 coredns-66bc5c9577- kube-system 9ff4b695-b070-4674-a703-cd00568559f5 853 0 2025-12-12 17:41:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-7ln7x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc070f98955 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.889 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.915 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" HandleID="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Workload="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.916 [INFO][4810] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" HandleID="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Workload="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-7ln7x", "timestamp":"2025-12-12 17:41:39.915179052 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.916 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.952 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:39.952 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.025 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.035 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.051 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.054 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.057 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.057 [INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.062 [INFO][4810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.067 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.078 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.079 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" host="localhost" Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.079 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:41:40.108742 containerd[1582]: 2025-12-12 17:41:40.079 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" HandleID="k8s-pod-network.7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Workload="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.109448 containerd[1582]: 2025-12-12 17:41:40.085 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7ln7x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9ff4b695-b070-4674-a703-cd00568559f5", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-7ln7x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc070f98955", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:40.109448 containerd[1582]: 2025-12-12 17:41:40.085 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.109448 containerd[1582]: 2025-12-12 17:41:40.086 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc070f98955 ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.109448 containerd[1582]: 2025-12-12 17:41:40.088 
[INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.109448 containerd[1582]: 2025-12-12 17:41:40.088 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7ln7x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9ff4b695-b070-4674-a703-cd00568559f5", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc", Pod:"coredns-66bc5c9577-7ln7x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc070f98955", MAC:"22:b7:f4:46:19:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:41:40.109448 containerd[1582]: 2025-12-12 17:41:40.103 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" Namespace="kube-system" Pod="coredns-66bc5c9577-7ln7x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7ln7x-eth0" Dec 12 17:41:40.110000 audit[4873]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:40.110000 audit[4873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc9bf20b0 a2=0 a3=1 items=0 ppid=2891 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:40.125845 containerd[1582]: time="2025-12-12T17:41:40.124795382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-glc24,Uid:42e96bc1-b1e8-4725-afa1-530d18ed87af,Namespace:calico-system,Attempt:0,} returns sandbox id \"232a1bc299d2cbc9fabebba1dedb02cda6913b9a446f9cda649ed54dd8f59172\"" Dec 12 17:41:40.128851 containerd[1582]: time="2025-12-12T17:41:40.126635732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:41:40.130000 audit[4889]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=4889 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:41:40.130000 audit[4889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23876 a0=3 a1=ffffce421720 a2=0 a3=ffffb2432fa8 items=0 ppid=3955 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.130000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:41:40.134397 containerd[1582]: time="2025-12-12T17:41:40.134366435Z" level=info msg="connecting to shim 7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc" address="unix:///run/containerd/s/78a916685a93f25be2e9894f40f69dfc4af9fea11dc4732b177a814540747c7e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:40.167013 systemd[1]: Started cri-containerd-7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc.scope - libcontainer container 7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc. 
Dec 12 17:41:40.176000 audit: BPF prog-id=249 op=LOAD Dec 12 17:41:40.176000 audit: BPF prog-id=250 op=LOAD Dec 12 17:41:40.176000 audit[4911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.177000 audit: BPF prog-id=250 op=UNLOAD Dec 12 17:41:40.177000 audit[4911]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.177000 audit: BPF prog-id=251 op=LOAD Dec 12 17:41:40.177000 audit[4911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.177000 audit: BPF prog-id=252 op=LOAD Dec 12 17:41:40.177000 audit[4911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.177000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:41:40.177000 audit[4911]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.177000 audit: BPF prog-id=251 op=UNLOAD Dec 12 17:41:40.177000 audit[4911]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.177000 audit: BPF prog-id=253 op=LOAD Dec 12 17:41:40.177000 audit[4911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4899 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666163656230663938323037373961323361646338613636373733 Dec 12 17:41:40.178891 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:41:40.200562 containerd[1582]: time="2025-12-12T17:41:40.200524350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7ln7x,Uid:9ff4b695-b070-4674-a703-cd00568559f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc\"" Dec 12 17:41:40.201411 kubelet[2737]: E1212 17:41:40.201388 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:40.208420 containerd[1582]: time="2025-12-12T17:41:40.208379914Z" level=info msg="CreateContainer within sandbox \"7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:41:40.243116 containerd[1582]: time="2025-12-12T17:41:40.242266667Z" level=info msg="Container 5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:40.249317 containerd[1582]: time="2025-12-12T17:41:40.249256366Z" level=info msg="CreateContainer within sandbox \"7ffaceb0f9820779a23adc8a667737aa9f1974bb72211fd15acb763d36f4cdfc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1\"" Dec 12 17:41:40.249789 containerd[1582]: time="2025-12-12T17:41:40.249767292Z" level=info msg="StartContainer for \"5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1\"" Dec 12 17:41:40.250841 containerd[1582]: time="2025-12-12T17:41:40.250797986Z" level=info msg="connecting to shim 5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1" address="unix:///run/containerd/s/78a916685a93f25be2e9894f40f69dfc4af9fea11dc4732b177a814540747c7e" protocol=ttrpc version=3 Dec 12 17:41:40.270986 systemd[1]: Started cri-containerd-5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1.scope - libcontainer container 5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1. 
Dec 12 17:41:40.280000 audit: BPF prog-id=254 op=LOAD Dec 12 17:41:40.281000 audit: BPF prog-id=255 op=LOAD Dec 12 17:41:40.281000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.281000 audit: BPF prog-id=255 op=UNLOAD Dec 12 17:41:40.281000 audit[4936]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.281000 audit: BPF prog-id=256 op=LOAD Dec 12 17:41:40.281000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.281000 audit: BPF prog-id=257 op=LOAD Dec 12 17:41:40.281000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.281000 audit: BPF prog-id=257 op=UNLOAD Dec 12 17:41:40.281000 audit[4936]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.281000 audit: BPF prog-id=256 op=UNLOAD Dec 12 17:41:40.281000 audit[4936]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.281000 audit: BPF prog-id=258 op=LOAD Dec 12 17:41:40.281000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4899 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561326364623632383639323337323535393534366339616337356565 Dec 12 17:41:40.298045 containerd[1582]: time="2025-12-12T17:41:40.297984301Z" level=info msg="StartContainer for \"5a2cdb628692372559546c9ac75ee914fd3c260684b540b36a656d07ba7131f1\" returns successfully" Dec 12 17:41:40.329550 containerd[1582]: time="2025-12-12T17:41:40.329475451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:40.330749 containerd[1582]: time="2025-12-12T17:41:40.330682654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:41:40.331005 containerd[1582]: time="2025-12-12T17:41:40.330721181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:40.331173 kubelet[2737]: E1212 17:41:40.331119 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:41:40.331173 kubelet[2737]: E1212 17:41:40.331161 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:41:40.331266 kubelet[2737]: E1212 17:41:40.331233 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-glc24_calico-system(42e96bc1-b1e8-4725-afa1-530d18ed87af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:40.331289 kubelet[2737]: E1212 17:41:40.331268 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-glc24" podUID="42e96bc1-b1e8-4725-afa1-530d18ed87af" Dec 12 17:41:40.915169 systemd-networkd[1490]: calie973f66e3bb: Gained IPv6LL Dec 12 17:41:41.034590 kubelet[2737]: E1212 17:41:41.034528 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-glc24" podUID="42e96bc1-b1e8-4725-afa1-530d18ed87af" Dec 12 17:41:41.037888 kubelet[2737]: E1212 17:41:41.037823 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:41.038854 kubelet[2737]: E1212 17:41:41.038831 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:41.039018 kubelet[2737]: E1212 17:41:41.038993 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:41.132000 audit[4970]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:41.135702 kernel: kauditd_printk_skb: 263 callbacks suppressed Dec 12 17:41:41.135763 kernel: audit: type=1325 audit(1765561301.132:752): table=filter:137 family=2 entries=14 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:41.135788 kernel: audit: type=1300 audit(1765561301.132:752): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd2044f80 a2=0 a3=1 items=0 ppid=2891 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:41.132000 audit[4970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd2044f80 a2=0 a3=1 items=0 ppid=2891 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:41.132000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:41.141414 kernel: audit: type=1327 audit(1765561301.132:752): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:41.148000 audit[4970]: NETFILTER_CFG table=nat:138 family=2 entries=44 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:41.148000 audit[4970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd2044f80 a2=0 a3=1 items=0 ppid=2891 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:41.155967 kernel: audit: type=1325 audit(1765561301.148:753): table=nat:138 family=2 entries=44 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:41.156022 kernel: audit: type=1300 audit(1765561301.148:753): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd2044f80 a2=0 a3=1 items=0 ppid=2891 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:41.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:41.157619 kernel: audit: type=1327 audit(1765561301.148:753): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:41.554923 systemd-networkd[1490]: calidc070f98955: Gained IPv6LL Dec 12 17:41:41.938983 systemd-networkd[1490]: cali900719bb506: Gained IPv6LL Dec 12 17:41:42.039062 kubelet[2737]: E1212 17:41:42.039020 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:42.039379 kubelet[2737]: E1212 17:41:42.039183 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:42.040068 kubelet[2737]: E1212 17:41:42.040038 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-glc24" podUID="42e96bc1-b1e8-4725-afa1-530d18ed87af" Dec 12 17:41:42.050247 kubelet[2737]: I1212 17:41:42.050189 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7ln7x" podStartSLOduration=42.050173181 podStartE2EDuration="42.050173181s" podCreationTimestamp="2025-12-12 17:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:41.076051956 +0000 UTC m=+46.345015048" watchObservedRunningTime="2025-12-12 17:41:42.050173181 +0000 UTC m=+47.319136273" Dec 12 17:41:42.178000 audit[4978]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4978 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:42.178000 audit[4978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe1cb53c0 a2=0 a3=1 items=0 ppid=2891 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:42.185542 kernel: audit: type=1325 audit(1765561302.178:754): table=filter:139 family=2 entries=14 op=nft_register_rule pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:42.185617 kernel: audit: type=1300 audit(1765561302.178:754): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe1cb53c0 a2=0 a3=1 items=0 ppid=2891 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:42.185650 kernel: audit: type=1327 audit(1765561302.178:754): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:42.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:42.192000 audit[4978]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:42.192000 audit[4978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe1cb53c0 a2=0 a3=1 items=0 ppid=2891 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:42.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:42.197465 kernel: audit: type=1325 audit(1765561302.192:755): table=nat:140 family=2 entries=56 op=nft_register_chain pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:43.040621 kubelet[2737]: E1212 17:41:43.040593 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:43.701655 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:58406.service - OpenSSH per-connection server daemon (10.0.0.1:58406). Dec 12 17:41:43.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.131:22-10.0.0.1:58406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:41:43.761000 audit[4985]: USER_ACCT pid=4985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:43.762298 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 58406 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:43.762000 audit[4985]: CRED_ACQ pid=4985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:43.762000 audit[4985]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebb0ae60 a2=3 a3=0 items=0 ppid=1 pid=4985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:43.762000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:43.764241 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:43.768851 systemd-logind[1556]: New session 10 of user core. Dec 12 17:41:43.776037 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:41:43.777000 audit[4985]: USER_START pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:43.778000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:43.842907 containerd[1582]: time="2025-12-12T17:41:43.842868923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:41:43.924838 sshd[4988]: Connection closed by 10.0.0.1 port 58406 Dec 12 17:41:43.925207 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:43.927000 audit[4985]: USER_END pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:43.927000 audit[4985]: CRED_DISP pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:43.936016 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:58406.service: Deactivated successfully. Dec 12 17:41:43.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.131:22-10.0.0.1:58406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:43.937787 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:41:43.939072 systemd-logind[1556]: Session 10 logged out. 
Waiting for processes to exit. Dec 12 17:41:43.941482 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:58410.service - OpenSSH per-connection server daemon (10.0.0.1:58410). Dec 12 17:41:43.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.131:22-10.0.0.1:58410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:43.942903 systemd-logind[1556]: Removed session 10. Dec 12 17:41:43.999000 audit[5002]: USER_ACCT pid=5002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.000901 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:44.000000 audit[5002]: CRED_ACQ pid=5002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.000000 audit[5002]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe49987e0 a2=3 a3=0 items=0 ppid=1 pid=5002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:44.000000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:44.002104 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:44.008739 systemd-logind[1556]: New session 11 of user core. Dec 12 17:41:44.019017 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:41:44.020000 audit[5002]: USER_START pid=5002 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.022000 audit[5005]: CRED_ACQ pid=5005 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.043108 kubelet[2737]: E1212 17:41:44.043071 2737 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:41:44.046051 containerd[1582]: time="2025-12-12T17:41:44.046014969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:44.047146 containerd[1582]: time="2025-12-12T17:41:44.047102257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:41:44.047225 containerd[1582]: time="2025-12-12T17:41:44.047174308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:44.047361 kubelet[2737]: E1212 17:41:44.047322 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:41:44.047404 kubelet[2737]: E1212 17:41:44.047368 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:41:44.047457 kubelet[2737]: E1212 17:41:44.047433 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-75798fb48-m5jp4_calico-system(cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:44.048374 containerd[1582]: time="2025-12-12T17:41:44.048159220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:41:44.186152 sshd[5005]: Connection closed by 10.0.0.1 port 58410 Dec 12 17:41:44.186350 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:44.188000 audit[5002]: USER_END pid=5002 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.188000 audit[5002]: CRED_DISP pid=5002 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.198703 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:58410.service: Deactivated successfully. Dec 12 17:41:44.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.131:22-10.0.0.1:58410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:44.203554 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:41:44.206511 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:41:44.213930 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:58418.service - OpenSSH per-connection server daemon (10.0.0.1:58418). Dec 12 17:41:44.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.131:22-10.0.0.1:58418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:44.216585 systemd-logind[1556]: Removed session 11. Dec 12 17:41:44.261395 containerd[1582]: time="2025-12-12T17:41:44.261156214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:44.263961 containerd[1582]: time="2025-12-12T17:41:44.263907400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:41:44.264137 containerd[1582]: time="2025-12-12T17:41:44.264011576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:44.264457 kubelet[2737]: E1212 17:41:44.264318 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:41:44.264457 kubelet[2737]: E1212 17:41:44.264376 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:41:44.264457 kubelet[2737]: E1212 17:41:44.264443 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-75798fb48-m5jp4_calico-system(cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:44.264619 kubelet[2737]: E1212 17:41:44.264484 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75798fb48-m5jp4" podUID="cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a" Dec 12 17:41:44.273000 audit[5016]: USER_ACCT pid=5016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.274465 sshd[5016]: Accepted publickey for core from 10.0.0.1 port 58418 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:44.276473 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:44.275000 audit[5016]: CRED_ACQ pid=5016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.275000 audit[5016]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc23bf0b0 a2=3 a3=0 items=0 ppid=1 pid=5016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:44.275000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:44.281471 systemd-logind[1556]: New session 12 of user core. Dec 12 17:41:44.288022 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:41:44.289000 audit[5016]: USER_START pid=5016 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.291000 audit[5019]: CRED_ACQ pid=5019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.416237 sshd[5019]: Connection closed by 10.0.0.1 port 58418 Dec 12 17:41:44.416600 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:44.417000 audit[5016]: USER_END pid=5016 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.417000 audit[5016]: CRED_DISP pid=5016 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:44.421042 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:58418.service: Deactivated successfully. Dec 12 17:41:44.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.131:22-10.0.0.1:58418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:41:44.423046 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:41:44.423846 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:41:44.425463 systemd-logind[1556]: Removed session 12. Dec 12 17:41:48.841074 containerd[1582]: time="2025-12-12T17:41:48.840987938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:41:49.129139 containerd[1582]: time="2025-12-12T17:41:49.129069527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:49.130348 containerd[1582]: time="2025-12-12T17:41:49.130287020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:41:49.130417 containerd[1582]: time="2025-12-12T17:41:49.130299741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:49.130631 kubelet[2737]: E1212 17:41:49.130571 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:41:49.130631 kubelet[2737]: E1212 17:41:49.130631 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:41:49.131203 kubelet[2737]: E1212 17:41:49.130709 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-64f8b9c58f-6dld8_calico-system(8b64634c-702a-4400-b949-31628cf96118): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:49.131203 kubelet[2737]: E1212 17:41:49.130740 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" podUID="8b64634c-702a-4400-b949-31628cf96118" Dec 12 17:41:49.440833 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434). Dec 12 17:41:49.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.131:22-10.0.0.1:58434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:41:49.442306 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 12 17:41:49.442381 kernel: audit: type=1130 audit(1765561309.439:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.131:22-10.0.0.1:58434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:49.507704 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:49.505000 audit[5037]: USER_ACCT pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.512463 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:49.510000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.516668 kernel: audit: type=1101 audit(1765561309.505:784): pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.516745 kernel: audit: type=1103 audit(1765561309.510:785): pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.519296 kernel: audit: type=1006 audit(1765561309.510:786): pid=5037 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:41:49.519383 kernel: audit: type=1300 audit(1765561309.510:786): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6573f20 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:49.510000 audit[5037]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6573f20 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:49.521460 systemd-logind[1556]: New session 13 of user core. Dec 12 17:41:49.523840 kernel: audit: type=1327 audit(1765561309.510:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:49.510000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:49.531406 systemd[1]: Started session-13.scope - Session 13 of User core. 
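The PullImage failures recorded above (goldmane, whisker, whisker-backend, kube-controllers, all v3.30.4 tags under ghcr.io/flatcar/calico) recur throughout this journal, so a quick tally is useful. A minimal sketch that counts them from a saved copy of the journal text; the filename journal.txt is only an assumption:

import re
from collections import Counter

# Count 'PullImage "<image>" failed' records in a saved copy of this journal.
# The quotes appear backslash-escaped in the captured text, hence \\" in the
# pattern; "journal.txt" is an assumed local filename.
pattern = re.compile(r'PullImage \\"([^"\\]+)\\" failed')
failures = Counter()
with open("journal.txt", encoding="utf-8") as fh:
    for line in fh:
        failures.update(pattern.findall(line))

for image, count in failures.most_common():
    print(f"{count:3d}  {image}")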
Dec 12 17:41:49.533000 audit[5037]: USER_START pid=5037 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.535000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.543332 kernel: audit: type=1105 audit(1765561309.533:787): pid=5037 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.543466 kernel: audit: type=1103 audit(1765561309.535:788): pid=5040 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.632081 sshd[5040]: Connection closed by 10.0.0.1 port 58434 Dec 12 17:41:49.632881 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:49.632000 audit[5037]: USER_END pid=5037 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.632000 audit[5037]: CRED_DISP pid=5037 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.641579 kernel: audit: type=1106 audit(1765561309.632:789): pid=5037 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.641708 kernel: audit: type=1104 audit(1765561309.632:790): pid=5037 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.645828 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:58434.service: Deactivated successfully. Dec 12 17:41:49.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.131:22-10.0.0.1:58434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:49.648026 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:41:49.649167 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:41:49.651783 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450). 
Dec 12 17:41:49.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.131:22-10.0.0.1:58450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:49.652587 systemd-logind[1556]: Removed session 13. Dec 12 17:41:49.709000 audit[5053]: USER_ACCT pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.711736 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:49.711000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.711000 audit[5053]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6c67e50 a2=3 a3=0 items=0 ppid=1 pid=5053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:49.711000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:49.714145 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:49.721831 systemd-logind[1556]: New session 14 of user core. Dec 12 17:41:49.734080 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:41:49.735000 audit[5053]: USER_START pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.737000 audit[5056]: CRED_ACQ pid=5056 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.913771 sshd[5056]: Connection closed by 10.0.0.1 port 58450 Dec 12 17:41:49.914125 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:49.913000 audit[5053]: USER_END pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.913000 audit[5053]: CRED_DISP pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:49.927291 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:58450.service: Deactivated successfully. Dec 12 17:41:49.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.131:22-10.0.0.1:58450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:41:49.929112 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:41:49.929820 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:41:49.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.131:22-10.0.0.1:58454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:49.932368 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:58454.service - OpenSSH per-connection server daemon (10.0.0.1:58454). Dec 12 17:41:49.933071 systemd-logind[1556]: Removed session 14. Dec 12 17:41:50.007064 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 58454 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:50.005000 audit[5069]: USER_ACCT pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.007000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.007000 audit[5069]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7caf640 a2=3 a3=0 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:50.007000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:50.009718 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:50.014949 systemd-logind[1556]: New session 15 of user core. Dec 12 17:41:50.026058 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 17:41:50.026000 audit[5069]: USER_START pid=5069 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.028000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.638000 audit[5090]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:50.638000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc4cafd10 a2=0 a3=1 items=0 ppid=2891 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:50.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:50.646000 audit[5090]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:50.646000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc4cafd10 a2=0 a3=1 items=0 ppid=2891 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:50.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:50.651826 sshd[5072]: Connection closed by 10.0.0.1 port 58454 Dec 12 17:41:50.652558 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:50.653000 audit[5069]: USER_END pid=5069 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.654000 audit[5069]: CRED_DISP pid=5069 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.668041 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:58454.service: Deactivated successfully. Dec 12 17:41:50.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.131:22-10.0.0.1:58454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:50.670326 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:41:50.671986 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:41:50.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.131:22-10.0.0.1:58470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:41:50.674441 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:58470.service - OpenSSH per-connection server daemon (10.0.0.1:58470). Dec 12 17:41:50.677659 systemd-logind[1556]: Removed session 15. Dec 12 17:41:50.742000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.744065 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 58470 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:50.745000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.745000 audit[5095]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed48eb30 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:50.745000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:50.746552 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:50.751265 systemd-logind[1556]: New session 16 of user core. Dec 12 17:41:50.761083 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:41:50.762000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.764000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:50.842783 containerd[1582]: time="2025-12-12T17:41:50.842736382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:41:51.074725 containerd[1582]: time="2025-12-12T17:41:51.074603718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:51.078131 containerd[1582]: time="2025-12-12T17:41:51.076212259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:41:51.078131 containerd[1582]: time="2025-12-12T17:41:51.076240023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:51.078432 kubelet[2737]: E1212 17:41:51.078386 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:51.079292 kubelet[2737]: E1212 
17:41:51.078793 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:51.079292 kubelet[2737]: E1212 17:41:51.078914 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8ff6bd4cf-pcmpc_calico-apiserver(59335d98-4d93-4df2-bc4b-c4c82b6bcd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:51.079502 kubelet[2737]: E1212 17:41:51.078943 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" podUID="59335d98-4d93-4df2-bc4b-c4c82b6bcd24" Dec 12 17:41:51.082143 sshd[5098]: Connection closed by 10.0.0.1 port 58470 Dec 12 17:41:51.083941 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:51.088000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.088000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.131:22-10.0.0.1:58470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:51.099766 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:58470.service: Deactivated successfully. Dec 12 17:41:51.102650 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:41:51.103441 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:41:51.107510 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:36012.service - OpenSSH per-connection server daemon (10.0.0.1:36012). Dec 12 17:41:51.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.131:22-10.0.0.1:36012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:51.109723 systemd-logind[1556]: Removed session 16. 
Dec 12 17:41:51.167000 audit[5109]: USER_ACCT pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.168618 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 36012 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:51.168000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.168000 audit[5109]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe404d090 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:51.168000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:51.169956 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:51.174894 systemd-logind[1556]: New session 17 of user core. Dec 12 17:41:51.183040 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:41:51.184000 audit[5109]: USER_START pid=5109 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.186000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.281331 sshd[5112]: Connection closed by 10.0.0.1 port 36012 Dec 12 17:41:51.281774 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:51.282000 audit[5109]: USER_END pid=5109 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.282000 audit[5109]: CRED_DISP pid=5109 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:51.286339 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:36012.service: Deactivated successfully. Dec 12 17:41:51.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.131:22-10.0.0.1:36012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:51.289444 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:41:51.291565 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:41:51.292483 systemd-logind[1556]: Removed session 17. 
Dec 12 17:41:51.663000 audit[5131]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:51.663000 audit[5131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe0368120 a2=0 a3=1 items=0 ppid=2891 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:51.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:51.672000 audit[5131]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:51.672000 audit[5131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe0368120 a2=0 a3=1 items=0 ppid=2891 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:51.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:52.843463 containerd[1582]: time="2025-12-12T17:41:52.843420729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:41:53.054486 containerd[1582]: time="2025-12-12T17:41:53.054426269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:53.055614 containerd[1582]: time="2025-12-12T17:41:53.055578343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:41:53.055687 containerd[1582]: time="2025-12-12T17:41:53.055614788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:53.056038 kubelet[2737]: E1212 17:41:53.055975 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:41:53.056364 kubelet[2737]: E1212 17:41:53.056088 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:41:53.056364 kubelet[2737]: E1212 17:41:53.056214 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-glc24_calico-system(42e96bc1-b1e8-4725-afa1-530d18ed87af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:53.057016 kubelet[2737]: E1212 17:41:53.056876 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-glc24" podUID="42e96bc1-b1e8-4725-afa1-530d18ed87af" Dec 12 17:41:53.842290 containerd[1582]: time="2025-12-12T17:41:53.842253475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:41:54.045062 containerd[1582]: time="2025-12-12T17:41:54.044986117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:54.047846 containerd[1582]: time="2025-12-12T17:41:54.046772313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:41:54.048073 containerd[1582]: time="2025-12-12T17:41:54.046824760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:54.048355 kubelet[2737]: E1212 17:41:54.048307 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:41:54.048416 kubelet[2737]: E1212 17:41:54.048361 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:41:54.048823 kubelet[2737]: E1212 17:41:54.048562 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dpm8r_calico-system(e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:54.048887 containerd[1582]: time="2025-12-12T17:41:54.048660243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:41:54.243560 containerd[1582]: time="2025-12-12T17:41:54.243502940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:54.244544 containerd[1582]: time="2025-12-12T17:41:54.244493791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:41:54.244633 containerd[1582]: time="2025-12-12T17:41:54.244569241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:54.244736 kubelet[2737]: E1212 17:41:54.244695 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:54.245092 kubelet[2737]: E1212 17:41:54.244745 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:54.245540 containerd[1582]: time="2025-12-12T17:41:54.245468240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:41:54.245752 kubelet[2737]: E1212 17:41:54.244942 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d47949b7b-87fxs_calico-apiserver(0d34868c-1018-4669-b30b-dbcccb35f648): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:54.246068 kubelet[2737]: E1212 17:41:54.245767 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d47949b7b-87fxs" podUID="0d34868c-1018-4669-b30b-dbcccb35f648" Dec 12 17:41:54.474823 containerd[1582]: time="2025-12-12T17:41:54.474306915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:54.475454 containerd[1582]: time="2025-12-12T17:41:54.475411581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:41:54.475689 kubelet[2737]: E1212 17:41:54.475640 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:41:54.475753 kubelet[2737]: E1212 17:41:54.475696 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:41:54.475779 kubelet[2737]: E1212 17:41:54.475760 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dpm8r_calico-system(e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:54.475848 containerd[1582]: time="2025-12-12T17:41:54.475503393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:54.476898 kubelet[2737]: E1212 17:41:54.476845 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:41:54.843706 containerd[1582]: time="2025-12-12T17:41:54.843663299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:41:55.089050 containerd[1582]: time="2025-12-12T17:41:55.088991299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:41:55.089968 containerd[1582]: time="2025-12-12T17:41:55.089933062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:41:55.090168 containerd[1582]: time="2025-12-12T17:41:55.089968107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:41:55.090428 kubelet[2737]: E1212 17:41:55.090225 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:55.090428 kubelet[2737]: E1212 17:41:55.090276 2737 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:41:55.090428 kubelet[2737]: E1212 17:41:55.090360 2737 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8ff6bd4cf-74cxw_calico-apiserver(6276624d-bee7-4066-84ff-d2e0529aa160): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:41:55.090428 kubelet[2737]: E1212 17:41:55.090391 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" podUID="6276624d-bee7-4066-84ff-d2e0529aa160" Dec 12 17:41:55.843370 kubelet[2737]: E1212 17:41:55.843245 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75798fb48-m5jp4" podUID="cc4e0f8f-eaf8-4fb6-aa42-5841d9a7072a" Dec 12 17:41:55.870000 audit[5135]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:55.874088 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:41:55.874165 kernel: audit: type=1325 audit(1765561315.870:832): table=filter:145 family=2 entries=26 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:55.870000 audit[5135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc0599110 a2=0 a3=1 items=0 ppid=2891 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:55.878778 kernel: audit: type=1300 audit(1765561315.870:832): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc0599110 a2=0 a3=1 items=0 ppid=2891 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:55.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:55.880735 kernel: audit: type=1327 audit(1765561315.870:832): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:55.883000 audit[5135]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:55.883000 audit[5135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc0599110 a2=0 a3=1 items=0 ppid=2891 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:55.893940 kernel: audit: type=1325 audit(1765561315.883:833): table=nat:146 family=2 entries=104 op=nft_register_chain pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:41:55.894005 kernel: audit: type=1300 audit(1765561315.883:833): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc0599110 a2=0 a3=1 items=0 ppid=2891 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:55.894026 kernel: audit: type=1327 audit(1765561315.883:833): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:55.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:41:56.299075 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:36028.service - OpenSSH per-connection server daemon 
(10.0.0.1:36028). Dec 12 17:41:56.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.131:22-10.0.0.1:36028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:56.302835 kernel: audit: type=1130 audit(1765561316.298:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.131:22-10.0.0.1:36028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:56.369000 audit[5137]: USER_ACCT pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.370489 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 36028 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:41:56.372379 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:41:56.371000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.376933 kernel: audit: type=1101 audit(1765561316.369:835): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.377028 kernel: audit: type=1103 audit(1765561316.371:836): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.379347 kernel: audit: type=1006 audit(1765561316.371:837): pid=5137 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 12 17:41:56.371000 audit[5137]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff10c5b50 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:41:56.371000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:41:56.383459 systemd-logind[1556]: New session 18 of user core. Dec 12 17:41:56.396042 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:41:56.398000 audit[5137]: USER_START pid=5137 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.399000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.508390 sshd[5140]: Connection closed by 10.0.0.1 port 36028 Dec 12 17:41:56.508939 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:56.508000 audit[5137]: USER_END pid=5137 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.508000 audit[5137]: CRED_DISP pid=5137 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:41:56.513132 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:36028.service: Deactivated successfully. Dec 12 17:41:56.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.131:22-10.0.0.1:36028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:41:56.515076 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:41:56.516045 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:41:56.517114 systemd-logind[1556]: Removed session 18. Dec 12 17:42:00.842010 kubelet[2737]: E1212 17:42:00.841935 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f8b9c58f-6dld8" podUID="8b64634c-702a-4400-b949-31628cf96118" Dec 12 17:42:01.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.131:22-10.0.0.1:35712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:42:01.521257 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:35712.service - OpenSSH per-connection server daemon (10.0.0.1:35712). Dec 12 17:42:01.522089 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:42:01.522170 kernel: audit: type=1130 audit(1765561321.520:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.131:22-10.0.0.1:35712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:42:01.580000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.581566 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 35712 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:42:01.584000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.586065 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:01.588607 kernel: audit: type=1101 audit(1765561321.580:844): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.588664 kernel: audit: type=1103 audit(1765561321.584:845): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.588682 kernel: audit: type=1006 audit(1765561321.584:846): pid=5157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 12 17:42:01.584000 audit[5157]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0eb4750 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:42:01.593593 kernel: audit: type=1300 audit(1765561321.584:846): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0eb4750 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:42:01.593654 kernel: audit: type=1327 audit(1765561321.584:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:42:01.584000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:42:01.594128 systemd-logind[1556]: New session 19 of user core. Dec 12 17:42:01.601005 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:42:01.603000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.608000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.612760 kernel: audit: type=1105 audit(1765561321.603:847): pid=5157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.612898 kernel: audit: type=1103 audit(1765561321.608:848): pid=5162 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.765243 sshd[5162]: Connection closed by 10.0.0.1 port 35712 Dec 12 17:42:01.765744 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Dec 12 17:42:01.767000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.771657 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:42:01.768000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.776150 kernel: audit: type=1106 audit(1765561321.767:849): pid=5157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.776421 kernel: audit: type=1104 audit(1765561321.768:850): pid=5157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:01.772953 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:35712.service: Deactivated successfully. Dec 12 17:42:01.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.131:22-10.0.0.1:35712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:42:01.776974 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:42:01.779726 systemd-logind[1556]: Removed session 19. 
Dec 12 17:42:01.842041 kubelet[2737]: E1212 17:42:01.841972 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-pcmpc" podUID="59335d98-4d93-4df2-bc4b-c4c82b6bcd24" Dec 12 17:42:05.843096 kubelet[2737]: E1212 17:42:05.843020 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dpm8r" podUID="e9842b9a-1f71-48b6-9feb-99cf8f1cbcdb" Dec 12 17:42:06.788546 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:42:06.788669 kernel: audit: type=1130 audit(1765561326.782:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.131:22-10.0.0.1:35722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:42:06.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.131:22-10.0.0.1:35722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:42:06.783286 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:35722.service - OpenSSH per-connection server daemon (10.0.0.1:35722). 
Dec 12 17:42:06.842987 kubelet[2737]: E1212 17:42:06.842947 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-glc24" podUID="42e96bc1-b1e8-4725-afa1-530d18ed87af" Dec 12 17:42:06.846000 audit[5177]: USER_ACCT pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:06.847976 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 35722 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:42:06.850000 audit[5177]: CRED_ACQ pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:06.851797 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:06.854571 kernel: audit: type=1101 audit(1765561326.846:853): pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:06.854630 kernel: audit: type=1103 audit(1765561326.850:854): pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:42:06.858705 kernel: audit: type=1006 audit(1765561326.850:855): pid=5177 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 12 17:42:06.850000 audit[5177]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa40f7c0 a2=3 a3=0 items=0 ppid=1 pid=5177 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:42:06.863303 kernel: audit: type=1300 audit(1765561326.850:855): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa40f7c0 a2=3 a3=0 items=0 ppid=1 pid=5177 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:42:06.850000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:42:06.865065 kernel: audit: type=1327 audit(1765561326.850:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:42:06.868031 systemd-logind[1556]: New session 20 of user core. Dec 12 17:42:06.874022 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 17:42:06.876000 audit[5177]: USER_START pid=5177 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.880000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.884869 kernel: audit: type=1105 audit(1765561326.876:856): pid=5177 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.884930 kernel: audit: type=1103 audit(1765561326.880:857): pid=5180 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.982761 sshd[5180]: Connection closed by 10.0.0.1 port 35722
Dec 12 17:42:06.983294 sshd-session[5177]: pam_unix(sshd:session): session closed for user core
Dec 12 17:42:06.985000 audit[5177]: USER_END pid=5177 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.989393 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:35722.service: Deactivated successfully.
Dec 12 17:42:06.985000 audit[5177]: CRED_DISP pid=5177 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.993015 kernel: audit: type=1106 audit(1765561326.985:858): pid=5177 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.993122 kernel: audit: type=1104 audit(1765561326.985:859): pid=5177 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 12 17:42:06.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.131:22-10.0.0.1:35722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:42:06.993880 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 17:42:06.995406 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit.
Dec 12 17:42:06.997980 systemd-logind[1556]: Removed session 20.
Dec 12 17:42:07.841459 kubelet[2737]: E1212 17:42:07.841409 2737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8ff6bd4cf-74cxw" podUID="6276624d-bee7-4066-84ff-d2e0529aa160"