Dec 13 23:06:30.271369 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 23:06:30.271393 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Sat Dec 13 21:04:10 -00 2025
Dec 13 23:06:30.271457 kernel: KASLR enabled
Dec 13 23:06:30.271464 kernel: efi: EFI v2.7 by EDK II
Dec 13 23:06:30.271470 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 13 23:06:30.271475 kernel: random: crng init done
Dec 13 23:06:30.271483 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 13 23:06:30.271489 kernel: secureboot: Secure boot enabled
Dec 13 23:06:30.271498 kernel: ACPI: Early table checksum verification disabled
Dec 13 23:06:30.271504 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 13 23:06:30.271510 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 23:06:30.271516 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271522 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271529 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271538 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271544 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271551 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271557 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271564 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271570 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:06:30.271577 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 23:06:30.271583 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 13 23:06:30.271591 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 23:06:30.271597 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 13 23:06:30.271604 kernel: Zone ranges:
Dec 13 23:06:30.271610 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 23:06:30.271616 kernel: DMA32 empty
Dec 13 23:06:30.271623 kernel: Normal empty
Dec 13 23:06:30.271629 kernel: Device empty
Dec 13 23:06:30.271635 kernel: Movable zone start for each node
Dec 13 23:06:30.271641 kernel: Early memory node ranges
Dec 13 23:06:30.271648 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 13 23:06:30.271654 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 13 23:06:30.271661 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 13 23:06:30.271668 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 13 23:06:30.271675 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 13 23:06:30.271681 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 13 23:06:30.271688 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 13 23:06:30.271694 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 13 23:06:30.271700 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 23:06:30.271711 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 23:06:30.271718 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 23:06:30.271724 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 13 23:06:30.271731 kernel: psci: probing for conduit method from ACPI.
Dec 13 23:06:30.271738 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 23:06:30.271745 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 23:06:30.271752 kernel: psci: Trusted OS migration not required
Dec 13 23:06:30.271759 kernel: psci: SMC Calling Convention v1.1
Dec 13 23:06:30.271767 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 23:06:30.271774 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 13 23:06:30.271781 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 13 23:06:30.271788 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 23:06:30.271795 kernel: Detected PIPT I-cache on CPU0
Dec 13 23:06:30.271802 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 23:06:30.271809 kernel: CPU features: detected: Spectre-v4
Dec 13 23:06:30.271815 kernel: CPU features: detected: Spectre-BHB
Dec 13 23:06:30.271822 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 23:06:30.271829 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 23:06:30.271836 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 23:06:30.271844 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 23:06:30.271851 kernel: alternatives: applying boot alternatives
Dec 13 23:06:30.271858 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=44c63db9fd88171f565600c90d4cdf8b05fba369ef3a382917a5104525765913
Dec 13 23:06:30.271865 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 23:06:30.271872 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 23:06:30.271879 kernel: Fallback order for Node 0: 0
Dec 13 23:06:30.271886 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 13 23:06:30.271893 kernel: Policy zone: DMA
Dec 13 23:06:30.271899 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 23:06:30.271906 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 13 23:06:30.271914 kernel: software IO TLB: area num 4.
Dec 13 23:06:30.271921 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 13 23:06:30.271928 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 13 23:06:30.271935 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 23:06:30.271942 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 23:06:30.271950 kernel: rcu: RCU event tracing is enabled.
Dec 13 23:06:30.271957 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 23:06:30.271964 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 23:06:30.271971 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 23:06:30.271977 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 23:06:30.271984 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 23:06:30.271991 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 23:06:30.271999 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 23:06:30.272006 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 23:06:30.272013 kernel: GICv3: 256 SPIs implemented
Dec 13 23:06:30.272020 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 23:06:30.272027 kernel: Root IRQ handler: gic_handle_irq
Dec 13 23:06:30.272033 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 23:06:30.272040 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 13 23:06:30.272047 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 23:06:30.272054 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 23:06:30.272061 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 23:06:30.272069 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 13 23:06:30.272077 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 13 23:06:30.272084 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 13 23:06:30.272091 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 23:06:30.272098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:06:30.272118 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 23:06:30.272125 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 23:06:30.272132 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 23:06:30.272139 kernel: arm-pv: using stolen time PV
Dec 13 23:06:30.272147 kernel: Console: colour dummy device 80x25
Dec 13 23:06:30.272157 kernel: ACPI: Core revision 20240827
Dec 13 23:06:30.272165 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 23:06:30.272172 kernel: pid_max: default: 32768 minimum: 301
Dec 13 23:06:30.272179 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 13 23:06:30.272186 kernel: landlock: Up and running.
Dec 13 23:06:30.272194 kernel: SELinux: Initializing.
Dec 13 23:06:30.272201 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 23:06:30.272208 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 23:06:30.272217 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 23:06:30.272233 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 23:06:30.272240 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 13 23:06:30.272247 kernel: Remapping and enabling EFI services.
Dec 13 23:06:30.272254 kernel: smp: Bringing up secondary CPUs ...
Dec 13 23:06:30.272262 kernel: Detected PIPT I-cache on CPU1
Dec 13 23:06:30.272269 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 23:06:30.272278 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 13 23:06:30.272286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:06:30.272299 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 23:06:30.272307 kernel: Detected PIPT I-cache on CPU2
Dec 13 23:06:30.272315 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 23:06:30.272322 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 13 23:06:30.272330 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:06:30.272337 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 23:06:30.272345 kernel: Detected PIPT I-cache on CPU3
Dec 13 23:06:30.272354 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 23:06:30.272362 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 13 23:06:30.272369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:06:30.272376 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 23:06:30.272384 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 23:06:30.272393 kernel: SMP: Total of 4 processors activated.
Dec 13 23:06:30.272400 kernel: CPU: All CPU(s) started at EL1
Dec 13 23:06:30.272408 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 23:06:30.272416 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 23:06:30.272424 kernel: CPU features: detected: Common not Private translations
Dec 13 23:06:30.272431 kernel: CPU features: detected: CRC32 instructions
Dec 13 23:06:30.272439 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 23:06:30.272448 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 23:06:30.272456 kernel: CPU features: detected: LSE atomic instructions
Dec 13 23:06:30.272464 kernel: CPU features: detected: Privileged Access Never
Dec 13 23:06:30.272472 kernel: CPU features: detected: RAS Extension Support
Dec 13 23:06:30.272480 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 23:06:30.272487 kernel: alternatives: applying system-wide alternatives
Dec 13 23:06:30.272495 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 13 23:06:30.272504 kernel: Memory: 2448740K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 101212K reserved, 16384K cma-reserved)
Dec 13 23:06:30.272512 kernel: devtmpfs: initialized
Dec 13 23:06:30.272520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 23:06:30.272528 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 23:06:30.272536 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 23:06:30.272544 kernel: 0 pages in range for non-PLT usage
Dec 13 23:06:30.272551 kernel: 515168 pages in range for PLT usage
Dec 13 23:06:30.272559 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 23:06:30.272568 kernel: SMBIOS 3.0.0 present.
Dec 13 23:06:30.272579 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 23:06:30.272587 kernel: DMI: Memory slots populated: 1/1
Dec 13 23:06:30.272595 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 23:06:30.272603 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 23:06:30.272611 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 23:06:30.272619 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 23:06:30.272628 kernel: audit: initializing netlink subsys (disabled)
Dec 13 23:06:30.272636 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Dec 13 23:06:30.272646 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 23:06:30.272654 kernel: cpuidle: using governor menu
Dec 13 23:06:30.272661 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 23:06:30.272671 kernel: ASID allocator initialised with 32768 entries
Dec 13 23:06:30.272682 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 23:06:30.272692 kernel: Serial: AMBA PL011 UART driver
Dec 13 23:06:30.272699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 23:06:30.272707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 23:06:30.272715 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 23:06:30.272723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 23:06:30.272731 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 23:06:30.272740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 23:06:30.272748 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 23:06:30.272757 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 23:06:30.272764 kernel: ACPI: Added _OSI(Module Device)
Dec 13 23:06:30.272772 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 23:06:30.272780 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 23:06:30.272789 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 23:06:30.272798 kernel: ACPI: Interpreter enabled
Dec 13 23:06:30.272806 kernel: ACPI: Using GIC for interrupt routing
Dec 13 23:06:30.272816 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 23:06:30.272824 kernel: ACPI: CPU0 has been hot-added
Dec 13 23:06:30.272835 kernel: ACPI: CPU1 has been hot-added
Dec 13 23:06:30.272846 kernel: ACPI: CPU2 has been hot-added
Dec 13 23:06:30.272853 kernel: ACPI: CPU3 has been hot-added
Dec 13 23:06:30.272861 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 23:06:30.272869 kernel: printk: legacy console [ttyAMA0] enabled
Dec 13 23:06:30.272878 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 23:06:30.273080 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 23:06:30.273194 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 23:06:30.273292 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 23:06:30.273373 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 23:06:30.273455 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 23:06:30.273470 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 23:06:30.273477 kernel: PCI host bridge to bus 0000:00
Dec 13 23:06:30.273568 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 23:06:30.273644 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 23:06:30.273719 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 23:06:30.273792 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 23:06:30.273893 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 13 23:06:30.273984 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 23:06:30.274071 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 13 23:06:30.274169 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 13 23:06:30.274261 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 23:06:30.274349 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 13 23:06:30.274430 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 13 23:06:30.274512 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 13 23:06:30.274595 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 23:06:30.274670 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 23:06:30.274745 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 23:06:30.274757 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 23:06:30.274765 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 23:06:30.274773 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 23:06:30.274781 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 23:06:30.274789 kernel: iommu: Default domain type: Translated
Dec 13 23:06:30.274797 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 23:06:30.274804 kernel: efivars: Registered efivars operations
Dec 13 23:06:30.274813 kernel: vgaarb: loaded
Dec 13 23:06:30.274821 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 23:06:30.274829 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 23:06:30.274837 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 23:06:30.274844 kernel: pnp: PnP ACPI init
Dec 13 23:06:30.274942 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 23:06:30.274956 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 23:06:30.274964 kernel: NET: Registered PF_INET protocol family
Dec 13 23:06:30.274972 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 23:06:30.274980 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 23:06:30.274988 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 23:06:30.274995 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 23:06:30.275003 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 23:06:30.275012 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 23:06:30.275020 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 23:06:30.275028 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 23:06:30.275036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 23:06:30.275043 kernel: PCI: CLS 0 bytes, default 64
Dec 13 23:06:30.275051 kernel: kvm [1]: HYP mode not available
Dec 13 23:06:30.275059 kernel: Initialise system trusted keyrings
Dec 13 23:06:30.275067 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 23:06:30.275076 kernel: Key type asymmetric registered
Dec 13 23:06:30.275083 kernel: Asymmetric key parser 'x509' registered
Dec 13 23:06:30.275091 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 23:06:30.275099 kernel: io scheduler mq-deadline registered
Dec 13 23:06:30.275118 kernel: io scheduler kyber registered
Dec 13 23:06:30.275126 kernel: io scheduler bfq registered
Dec 13 23:06:30.275133 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 23:06:30.275143 kernel: ACPI: button: Power Button [PWRB]
Dec 13 23:06:30.275152 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 23:06:30.275250 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 23:06:30.275262 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 23:06:30.275270 kernel: thunder_xcv, ver 1.0
Dec 13 23:06:30.275277 kernel: thunder_bgx, ver 1.0
Dec 13 23:06:30.275285 kernel: nicpf, ver 1.0
Dec 13 23:06:30.275295 kernel: nicvf, ver 1.0
Dec 13 23:06:30.275387 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 23:06:30.275466 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-13T23:06:29 UTC (1765667189)
Dec 13 23:06:30.275476 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 23:06:30.275484 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 13 23:06:30.275493 kernel: watchdog: NMI not fully supported
Dec 13 23:06:30.275502 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 23:06:30.275510 kernel: NET: Registered PF_INET6 protocol family
Dec 13 23:06:30.275517 kernel: Segment Routing with IPv6
Dec 13 23:06:30.275525 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 23:06:30.275533 kernel: NET: Registered PF_PACKET protocol family
Dec 13 23:06:30.275540 kernel: Key type dns_resolver registered
Dec 13 23:06:30.275548 kernel: registered taskstats version 1
Dec 13 23:06:30.275556 kernel: Loading compiled-in X.509 certificates
Dec 13 23:06:30.275565 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: d89c978154dbb01b4a4598f2db878f2ea4aca29d'
Dec 13 23:06:30.275573 kernel: Demotion targets for Node 0: null
Dec 13 23:06:30.275580 kernel: Key type .fscrypt registered
Dec 13 23:06:30.275588 kernel: Key type fscrypt-provisioning registered
Dec 13 23:06:30.275596 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 23:06:30.275604 kernel: ima: Allocated hash algorithm: sha1
Dec 13 23:06:30.275613 kernel: ima: No architecture policies found
Dec 13 23:06:30.275620 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 23:06:30.275628 kernel: clk: Disabling unused clocks
Dec 13 23:06:30.275636 kernel: PM: genpd: Disabling unused power domains
Dec 13 23:06:30.275644 kernel: Freeing unused kernel memory: 12480K
Dec 13 23:06:30.275651 kernel: Run /init as init process
Dec 13 23:06:30.275659 kernel: with arguments:
Dec 13 23:06:30.275666 kernel: /init
Dec 13 23:06:30.275675 kernel: with environment:
Dec 13 23:06:30.275683 kernel: HOME=/
Dec 13 23:06:30.275690 kernel: TERM=linux
Dec 13 23:06:30.275784 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 23:06:30.275864 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 13 23:06:30.275874 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 23:06:30.275884 kernel: GPT:16515071 != 27000831
Dec 13 23:06:30.275892 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 23:06:30.275899 kernel: GPT:16515071 != 27000831
Dec 13 23:06:30.275907 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 23:06:30.275914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 23:06:30.275922 kernel: SCSI subsystem initialized
Dec 13 23:06:30.275930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 23:06:30.275939 kernel: device-mapper: uevent: version 1.0.3
Dec 13 23:06:30.275947 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 13 23:06:30.275954 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 23:06:30.275962 kernel: raid6: neonx8 gen() 15722 MB/s
Dec 13 23:06:30.275970 kernel: raid6: neonx4 gen() 15584 MB/s
Dec 13 23:06:30.275978 kernel: raid6: neonx2 gen() 13284 MB/s
Dec 13 23:06:30.275985 kernel: raid6: neonx1 gen() 10423 MB/s
Dec 13 23:06:30.275995 kernel: raid6: int64x8 gen() 6842 MB/s
Dec 13 23:06:30.276003 kernel: raid6: int64x4 gen() 7354 MB/s
Dec 13 23:06:30.276010 kernel: raid6: int64x2 gen() 6104 MB/s
Dec 13 23:06:30.276017 kernel: raid6: int64x1 gen() 5056 MB/s
Dec 13 23:06:30.276025 kernel: raid6: using algorithm neonx8 gen() 15722 MB/s
Dec 13 23:06:30.276033 kernel: raid6: .... xor() 12064 MB/s, rmw enabled
Dec 13 23:06:30.276040 kernel: raid6: using neon recovery algorithm
Dec 13 23:06:30.276048 kernel: xor: measuring software checksum speed
Dec 13 23:06:30.276057 kernel: 8regs : 21630 MB/sec
Dec 13 23:06:30.276064 kernel: 32regs : 21681 MB/sec
Dec 13 23:06:30.276072 kernel: arm64_neon : 28099 MB/sec
Dec 13 23:06:30.276080 kernel: xor: using function: arm64_neon (28099 MB/sec)
Dec 13 23:06:30.276088 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 23:06:30.276096 kernel: BTRFS: device fsid a1686a6f-a50a-4e68-84e0-ea41bcdb127c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (204)
Dec 13 23:06:30.276117 kernel: BTRFS info (device dm-0): first mount of filesystem a1686a6f-a50a-4e68-84e0-ea41bcdb127c
Dec 13 23:06:30.276127 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:06:30.276135 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 23:06:30.276142 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 13 23:06:30.276150 kernel: loop: module loaded
Dec 13 23:06:30.276158 kernel: loop0: detected capacity change from 0 to 91832
Dec 13 23:06:30.276165 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 23:06:30.276174 systemd[1]: Successfully made /usr/ read-only.
Dec 13 23:06:30.276186 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 13 23:06:30.276196 systemd[1]: Detected virtualization kvm.
Dec 13 23:06:30.276204 systemd[1]: Detected architecture arm64.
Dec 13 23:06:30.276212 systemd[1]: Running in initrd.
Dec 13 23:06:30.276227 systemd[1]: No hostname configured, using default hostname.
Dec 13 23:06:30.276237 systemd[1]: Hostname set to .
Dec 13 23:06:30.276245 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 13 23:06:30.276254 systemd[1]: Queued start job for default target initrd.target.
Dec 13 23:06:30.276262 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 23:06:30.276270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 23:06:30.276279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 23:06:30.276288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 23:06:30.276298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 23:06:30.276307 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 23:06:30.276316 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 23:06:30.276324 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 23:06:30.276333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 23:06:30.276343 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 13 23:06:30.276351 systemd[1]: Reached target paths.target - Path Units.
Dec 13 23:06:30.276359 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 23:06:30.276367 systemd[1]: Reached target swap.target - Swaps.
Dec 13 23:06:30.276376 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 23:06:30.276384 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 23:06:30.276392 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 23:06:30.276402 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 13 23:06:30.276411 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 23:06:30.276419 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 13 23:06:30.276427 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 23:06:30.276436 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 23:06:30.276452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 23:06:30.276462 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 23:06:30.276471 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 23:06:30.276480 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 23:06:30.276488 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 23:06:30.276497 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 23:06:30.276506 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 13 23:06:30.276516 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 23:06:30.276525 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 23:06:30.276533 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 23:06:30.276543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:06:30.276553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 23:06:30.276562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 23:06:30.276571 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 23:06:30.276584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 23:06:30.276619 systemd-journald[347]: Collecting audit messages is enabled.
Dec 13 23:06:30.276648 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 23:06:30.276659 systemd-journald[347]: Journal started
Dec 13 23:06:30.276678 systemd-journald[347]: Runtime Journal (/run/log/journal/445a64f773174dd39b68e52d49081e46) is 6M, max 48.5M, 42.4M free.
Dec 13 23:06:30.277242 systemd-modules-load[348]: Inserted module 'br_netfilter'
Dec 13 23:06:30.278846 kernel: Bridge firewalling registered
Dec 13 23:06:30.278868 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 23:06:30.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.280638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 23:06:30.286292 kernel: audit: type=1130 audit(1765667190.279:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.286326 kernel: audit: type=1130 audit(1765667190.282:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.286297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:06:30.289167 kernel: audit: type=1130 audit(1765667190.288:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.291461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 23:06:30.296250 kernel: audit: type=1130 audit(1765667190.292:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.295344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 23:06:30.298014 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 23:06:30.299835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 23:06:30.314252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 23:06:30.325240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 23:06:30.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.328002 systemd-tmpfiles[371]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 13 23:06:30.331649 kernel: audit: type=1130 audit(1765667190.326:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.329896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 23:06:30.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.336642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 23:06:30.341627 kernel: audit: type=1130 audit(1765667190.333:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.341653 kernel: audit: type=1130 audit(1765667190.337:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.338216 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 23:06:30.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.344634 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 23:06:30.348124 kernel: audit: type=1130 audit(1765667190.342:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.348148 kernel: audit: type=1334 audit(1765667190.347:10): prog-id=6 op=LOAD
Dec 13 23:06:30.347000 audit: BPF prog-id=6 op=LOAD
Dec 13 23:06:30.348709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 23:06:30.369499 dracut-cmdline[387]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=44c63db9fd88171f565600c90d4cdf8b05fba369ef3a382917a5104525765913
Dec 13 23:06:30.392199 systemd-resolved[388]: Positive Trust Anchors:
Dec 13 23:06:30.392226 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 23:06:30.392230 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 13 23:06:30.392261 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 23:06:30.418331 systemd-resolved[388]: Defaulting to hostname 'linux'.
Dec 13 23:06:30.419201 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 23:06:30.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.420339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 23:06:30.458140 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 23:06:30.468165 kernel: iscsi: registered transport (tcp)
Dec 13 23:06:30.482123 kernel: iscsi: registered transport (qla4xxx)
Dec 13 23:06:30.482177 kernel: QLogic iSCSI HBA Driver
Dec 13 23:06:30.504678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 23:06:30.530285 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 23:06:30.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.532599 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 23:06:30.583212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 23:06:30.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.585462 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 23:06:30.588353 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 23:06:30.634798 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 23:06:30.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.636000 audit: BPF prog-id=7 op=LOAD
Dec 13 23:06:30.637000 audit: BPF prog-id=8 op=LOAD
Dec 13 23:06:30.637792 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 23:06:30.668833 systemd-udevd[632]: Using default interface naming scheme 'v257'.
Dec 13 23:06:30.677199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 23:06:30.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.680544 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 23:06:30.707552 dracut-pre-trigger[699]: rd.md=0: removing MD RAID activation
Dec 13 23:06:30.711252 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 23:06:30.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.714000 audit: BPF prog-id=9 op=LOAD
Dec 13 23:06:30.715934 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 23:06:30.738244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 23:06:30.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.742307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 23:06:30.763283 systemd-networkd[747]: lo: Link UP
Dec 13 23:06:30.763292 systemd-networkd[747]: lo: Gained carrier
Dec 13 23:06:30.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.763808 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 23:06:30.765287 systemd[1]: Reached target network.target - Network.
Dec 13 23:06:30.814124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 23:06:30.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.818080 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 23:06:30.859781 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 23:06:30.873203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 23:06:30.885400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 23:06:30.892425 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 23:06:30.894581 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 23:06:30.905637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 23:06:30.905757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:06:30.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.907949 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:06:30.911366 systemd-networkd[747]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 23:06:30.911379 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 23:06:30.912097 systemd-networkd[747]: eth0: Link UP
Dec 13 23:06:30.912303 systemd-networkd[747]: eth0: Gained carrier
Dec 13 23:06:30.912314 systemd-networkd[747]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 23:06:30.914386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:06:30.922069 disk-uuid[807]: Primary Header is updated.
Dec 13 23:06:30.922069 disk-uuid[807]: Secondary Entries is updated.
Dec 13 23:06:30.922069 disk-uuid[807]: Secondary Header is updated.
Dec 13 23:06:30.928168 systemd-networkd[747]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 23:06:30.942653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 23:06:30.944381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 23:06:30.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.948235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 23:06:30.950399 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 23:06:30.955279 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 23:06:30.958651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:06:30.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:30.984774 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 23:06:30.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:31.952615 disk-uuid[808]: Warning: The kernel is still using the old partition table.
Dec 13 23:06:31.952615 disk-uuid[808]: The new table will be used at the next reboot or after you
Dec 13 23:06:31.952615 disk-uuid[808]: run partprobe(8) or kpartx(8)
Dec 13 23:06:31.952615 disk-uuid[808]: The operation has completed successfully.
Dec 13 23:06:31.958069 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 23:06:31.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:31.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:31.958186 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 23:06:31.960715 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 23:06:31.989120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Dec 13 23:06:31.992263 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:06:31.992325 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:06:31.995128 kernel: BTRFS info (device vda6): turning on async discard
Dec 13 23:06:31.995157 kernel: BTRFS info (device vda6): enabling free space tree
Dec 13 23:06:32.001146 kernel: BTRFS info (device vda6): last unmount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:06:32.001491 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 23:06:32.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.003514 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 23:06:32.119775 ignition[856]: Ignition 2.24.0
Dec 13 23:06:32.119789 ignition[856]: Stage: fetch-offline
Dec 13 23:06:32.119832 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Dec 13 23:06:32.119842 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:06:32.119995 ignition[856]: parsed url from cmdline: ""
Dec 13 23:06:32.119999 ignition[856]: no config URL provided
Dec 13 23:06:32.120676 ignition[856]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 23:06:32.120688 ignition[856]: no config at "/usr/lib/ignition/user.ign"
Dec 13 23:06:32.120735 ignition[856]: op(1): [started] loading QEMU firmware config module
Dec 13 23:06:32.120740 ignition[856]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 23:06:32.129266 ignition[856]: op(1): [finished] loading QEMU firmware config module
Dec 13 23:06:32.171073 ignition[856]: parsing config with SHA512: 64c500a9901a56a613944a43cc4158d856f0311afb5bfcd809b7305f55628884e8ab13ad3ab9f40de65241dfcd53b655854240b8af8c339960e07354933f3b50
Dec 13 23:06:32.175608 unknown[856]: fetched base config from "system"
Dec 13 23:06:32.175619 unknown[856]: fetched user config from "qemu"
Dec 13 23:06:32.177283 ignition[856]: fetch-offline: fetch-offline passed
Dec 13 23:06:32.177365 ignition[856]: Ignition finished successfully
Dec 13 23:06:32.179306 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 23:06:32.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.180626 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 23:06:32.181568 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 23:06:32.209699 ignition[872]: Ignition 2.24.0
Dec 13 23:06:32.209715 ignition[872]: Stage: kargs
Dec 13 23:06:32.209872 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Dec 13 23:06:32.209884 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:06:32.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.212947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 23:06:32.210738 ignition[872]: kargs: kargs passed
Dec 13 23:06:32.215267 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 23:06:32.210785 ignition[872]: Ignition finished successfully
Dec 13 23:06:32.241618 ignition[879]: Ignition 2.24.0
Dec 13 23:06:32.241638 ignition[879]: Stage: disks
Dec 13 23:06:32.241796 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Dec 13 23:06:32.244807 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 23:06:32.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.241805 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:06:32.245976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 23:06:32.242641 ignition[879]: disks: disks passed
Dec 13 23:06:32.247529 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 23:06:32.242691 ignition[879]: Ignition finished successfully
Dec 13 23:06:32.249498 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 23:06:32.251203 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 23:06:32.252599 systemd[1]: Reached target basic.target - Basic System.
Dec 13 23:06:32.256560 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 23:06:32.293279 systemd-fsck[888]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 13 23:06:32.367265 systemd-networkd[747]: eth0: Gained IPv6LL
Dec 13 23:06:32.430664 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 23:06:32.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.436473 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 23:06:32.513400 kernel: EXT4-fs (vda9): mounted filesystem b02592d5-55bb-4524-99a1-b54eb9e1980a r/w with ordered data mode. Quota mode: none.
Dec 13 23:06:32.513729 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 23:06:32.515523 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 23:06:32.519398 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 23:06:32.524128 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 23:06:32.526051 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 23:06:32.527733 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 23:06:32.527771 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 23:06:32.540999 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 23:06:32.543876 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 23:06:32.552875 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (896)
Dec 13 23:06:32.552937 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:06:32.552954 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:06:32.557654 kernel: BTRFS info (device vda6): turning on async discard
Dec 13 23:06:32.557722 kernel: BTRFS info (device vda6): enabling free space tree
Dec 13 23:06:32.558951 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 23:06:32.679737 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 23:06:32.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.682404 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 23:06:32.685637 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 23:06:32.708672 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 23:06:32.713118 kernel: BTRFS info (device vda6): last unmount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:06:32.732988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 23:06:32.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.744701 ignition[994]: INFO : Ignition 2.24.0
Dec 13 23:06:32.744701 ignition[994]: INFO : Stage: mount
Dec 13 23:06:32.746403 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 23:06:32.746403 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:06:32.746403 ignition[994]: INFO : mount: mount passed
Dec 13 23:06:32.746403 ignition[994]: INFO : Ignition finished successfully
Dec 13 23:06:32.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:32.747593 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 23:06:32.751804 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 23:06:33.515072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 23:06:33.546178 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1006)
Dec 13 23:06:33.548430 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:06:33.548453 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:06:33.553282 kernel: BTRFS info (device vda6): turning on async discard
Dec 13 23:06:33.553347 kernel: BTRFS info (device vda6): enabling free space tree
Dec 13 23:06:33.554669 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 23:06:33.587634 ignition[1023]: INFO : Ignition 2.24.0
Dec 13 23:06:33.587634 ignition[1023]: INFO : Stage: files
Dec 13 23:06:33.589230 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 23:06:33.589230 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:06:33.589230 ignition[1023]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 23:06:33.592586 ignition[1023]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 23:06:33.592586 ignition[1023]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 23:06:33.596477 ignition[1023]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 23:06:33.597704 ignition[1023]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 23:06:33.597704 ignition[1023]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 23:06:33.597631 unknown[1023]: wrote ssh authorized keys file for user: core
Dec 13 23:06:33.616411 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 13 23:06:33.616411 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 13 23:06:33.676884 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 23:06:33.832833 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 13 23:06:33.834773 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 23:06:33.836679 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 23:06:33.848774 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 23:06:33.848774 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 23:06:33.848774 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 13 23:06:33.848774 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 13 23:06:33.848774 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 13 23:06:33.848774 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 13 23:06:34.231188 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 23:06:34.560507 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 13 23:06:34.560507 ignition[1023]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 23:06:34.564078 ignition[1023]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 23:06:34.566950 ignition[1023]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 23:06:34.566950 ignition[1023]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 23:06:34.566950 ignition[1023]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 23:06:34.566950 ignition[1023]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 23:06:34.573581 ignition[1023]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 23:06:34.573581 ignition[1023]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 23:06:34.573581 ignition[1023]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 23:06:34.584650 ignition[1023]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 23:06:34.589108 ignition[1023]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 23:06:34.591195 ignition[1023]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 23:06:34.591195 ignition[1023]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 23:06:34.591195 ignition[1023]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 23:06:34.591195 ignition[1023]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 23:06:34.591195 ignition[1023]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 23:06:34.591195 ignition[1023]: INFO : files: files passed
Dec 13 23:06:34.591195 ignition[1023]: INFO : Ignition finished successfully
Dec 13 23:06:34.606598 kernel: kauditd_printk_skb: 26 callbacks suppressed
Dec 13 23:06:34.606633 kernel: audit: type=1130 audit(1765667194.595:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.593004 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 23:06:34.597248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 23:06:34.602265 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 23:06:34.616022 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 23:06:34.622428 kernel: audit: type=1130 audit(1765667194.617:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.622456 kernel: audit: type=1131 audit(1765667194.617:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.616170 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 23:06:34.623579 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 23:06:34.625626 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 23:06:34.625626 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 23:06:34.628832 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 23:06:34.631440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 23:06:34.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.633872 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 23:06:34.638335 kernel: audit: type=1130 audit(1765667194.633:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.638283 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 23:06:34.709382 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 23:06:34.710197 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 23:06:34.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.712391 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 23:06:34.718259 kernel: audit: type=1130 audit(1765667194.711:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.718295 kernel: audit: type=1131 audit(1765667194.712:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:34.717516 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 23:06:34.719306 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 23:06:34.720257 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 23:06:34.752133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 23:06:34.756647 kernel: audit: type=1130 audit(1765667194.753:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.754660 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 23:06:34.785795 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 23:06:34.786013 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 23:06:34.788084 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 23:06:34.790076 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 23:06:34.791792 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 23:06:34.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.791934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 23:06:34.799445 kernel: audit: type=1131 audit(1765667194.793:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.796492 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 23:06:34.800360 systemd[1]: Stopped target basic.target - Basic System. Dec 13 23:06:34.801867 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 23:06:34.803487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 23:06:34.805238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 23:06:34.807194 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 23:06:34.809068 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 23:06:34.810843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 23:06:34.812691 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 23:06:34.814514 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 23:06:34.816063 systemd[1]: Stopped target swap.target - Swaps. Dec 13 23:06:34.817535 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 23:06:34.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.817673 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 23:06:34.822984 kernel: audit: type=1131 audit(1765667194.819:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.822172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 23:06:34.824047 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 23:06:34.825908 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 23:06:34.829172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 23:06:34.830340 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 23:06:34.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:06:34.835173 kernel: audit: type=1131 audit(1765667194.832:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.830482 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 23:06:34.835212 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 23:06:34.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.835355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 23:06:34.837290 systemd[1]: Stopped target paths.target - Path Units. Dec 13 23:06:34.838795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 23:06:34.842176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 23:06:34.843372 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 23:06:34.845305 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 23:06:34.846773 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 23:06:34.846868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 23:06:34.848273 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 23:06:34.848353 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 23:06:34.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.849755 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Dec 13 23:06:34.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.849823 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 23:06:34.851407 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 23:06:34.851520 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 23:06:34.853143 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 23:06:34.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.853256 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 23:06:34.855689 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 23:06:34.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.857250 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 23:06:34.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.857378 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 23:06:34.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.860183 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 13 23:06:34.861775 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 23:06:34.861905 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 23:06:34.863715 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 23:06:34.863819 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 23:06:34.865434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 23:06:34.865538 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 23:06:34.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.872863 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 23:06:34.874143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 23:06:34.880640 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 23:06:34.883710 ignition[1080]: INFO : Ignition 2.24.0 Dec 13 23:06:34.883710 ignition[1080]: INFO : Stage: umount Dec 13 23:06:34.885324 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 23:06:34.885324 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 23:06:34.885324 ignition[1080]: INFO : umount: umount passed Dec 13 23:06:34.885324 ignition[1080]: INFO : Ignition finished successfully Dec 13 23:06:34.886501 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 23:06:34.889871 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Dec 13 23:06:34.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.891703 systemd[1]: Stopped target network.target - Network. Dec 13 23:06:34.892943 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 23:06:34.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.893016 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 23:06:34.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.894480 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 23:06:34.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.894535 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 23:06:34.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.897068 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 23:06:34.897144 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 23:06:34.899279 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 23:06:34.899335 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 23:06:34.901031 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Dec 13 23:06:34.902769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 23:06:34.909964 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 23:06:34.910097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 23:06:34.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.913766 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 23:06:34.913866 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 23:06:34.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.916000 audit: BPF prog-id=6 op=UNLOAD Dec 13 23:06:34.916762 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 23:06:34.917666 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 23:06:34.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.921417 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 23:06:34.921000 audit: BPF prog-id=9 op=UNLOAD Dec 13 23:06:34.922510 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 23:06:34.922552 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 23:06:34.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.924223 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 23:06:34.924283 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 23:06:34.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.926762 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 23:06:34.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.927650 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 23:06:34.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.927715 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 23:06:34.929672 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 23:06:34.929725 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 23:06:34.931338 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 23:06:34.931382 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 23:06:34.933034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 23:06:34.943159 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 23:06:34.943332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 23:06:34.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:06:34.946045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 23:06:34.946089 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 23:06:34.947856 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 23:06:34.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.947890 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 23:06:34.949498 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 23:06:34.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.949552 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 23:06:34.952255 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 23:06:34.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.952316 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 23:06:34.954890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 23:06:34.954948 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 23:06:34.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.958563 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Dec 13 23:06:34.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.959583 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 13 23:06:34.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.959652 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 23:06:34.961589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 23:06:34.961640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 23:06:34.963604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 23:06:34.963652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 23:06:34.984319 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 23:06:34.984556 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 23:06:34.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.986609 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 23:06:34.988170 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Dec 13 23:06:34.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:34.989701 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 23:06:34.991801 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 23:06:35.024745 systemd[1]: Switching root. Dec 13 23:06:35.060470 systemd-journald[347]: Journal stopped Dec 13 23:06:35.922931 systemd-journald[347]: Received SIGTERM from PID 1 (systemd). Dec 13 23:06:35.922982 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 23:06:35.923000 kernel: SELinux: policy capability open_perms=1 Dec 13 23:06:35.923040 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 23:06:35.923055 kernel: SELinux: policy capability always_check_network=0 Dec 13 23:06:35.923065 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 23:06:35.923075 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 23:06:35.923088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 23:06:35.923099 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 23:06:35.923184 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 23:06:35.923205 systemd[1]: Successfully loaded SELinux policy in 50.306ms. Dec 13 23:06:35.923224 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.731ms. Dec 13 23:06:35.923237 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 23:06:35.923249 systemd[1]: Detected virtualization kvm. Dec 13 23:06:35.923259 systemd[1]: Detected architecture arm64. 
Dec 13 23:06:35.923270 systemd[1]: Detected first boot. Dec 13 23:06:35.923281 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 23:06:35.923292 zram_generator::config[1124]: No configuration found. Dec 13 23:06:35.923310 kernel: NET: Registered PF_VSOCK protocol family Dec 13 23:06:35.923320 systemd[1]: Populated /etc with preset unit settings. Dec 13 23:06:35.923331 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 23:06:35.923342 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 23:06:35.923354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 23:06:35.923365 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 23:06:35.923377 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 23:06:35.923388 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 23:06:35.923399 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 23:06:35.923410 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 23:06:35.923421 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 23:06:35.923431 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 23:06:35.923442 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 23:06:35.923454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 23:06:35.923465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 23:06:35.923476 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 23:06:35.923491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 13 23:06:35.923502 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 23:06:35.923512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 23:06:35.923523 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 23:06:35.923536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 23:06:35.923547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 23:06:35.923558 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 23:06:35.923569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 23:06:35.923579 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 23:06:35.923590 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 23:06:35.923603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 23:06:35.923613 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 23:06:35.923624 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 23:06:35.923634 systemd[1]: Reached target slices.target - Slice Units. Dec 13 23:06:35.923645 systemd[1]: Reached target swap.target - Swaps. Dec 13 23:06:35.923656 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 23:06:35.923666 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 23:06:35.923678 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 23:06:35.923688 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 23:06:35.923699 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Dec 13 23:06:35.923710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 23:06:35.923721 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 23:06:35.923731 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 23:06:35.923742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 23:06:35.923754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 23:06:35.923765 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 23:06:35.923775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 23:06:35.923786 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 23:06:35.923796 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 23:06:35.923807 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 23:06:35.923818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 23:06:35.923829 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 23:06:35.923840 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 23:06:35.923851 systemd[1]: Reached target machines.target - Containers. Dec 13 23:06:35.923862 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 23:06:35.923873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 23:06:35.923884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 23:06:35.923895 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Dec 13 23:06:35.923906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 23:06:35.923917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 23:06:35.923928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 23:06:35.923940 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 23:06:35.923950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 23:06:35.923962 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 23:06:35.923973 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 23:06:35.923985 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 23:06:35.923995 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 23:06:35.924006 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 23:06:35.924017 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 23:06:35.924028 kernel: fuse: init (API version 7.41) Dec 13 23:06:35.924039 kernel: ACPI: bus type drm_connector registered Dec 13 23:06:35.924051 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 23:06:35.924062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 23:06:35.924073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 23:06:35.924085 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 23:06:35.924098 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 13 23:06:35.924119 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 23:06:35.924137 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 23:06:35.924148 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 23:06:35.924159 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 23:06:35.924200 systemd-journald[1194]: Collecting audit messages is enabled. Dec 13 23:06:35.924226 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 23:06:35.924238 systemd-journald[1194]: Journal started Dec 13 23:06:35.924259 systemd-journald[1194]: Runtime Journal (/run/log/journal/445a64f773174dd39b68e52d49081e46) is 6M, max 48.5M, 42.4M free. Dec 13 23:06:35.924300 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 23:06:35.772000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 23:06:35.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:06:35.879000 audit: BPF prog-id=14 op=UNLOAD Dec 13 23:06:35.879000 audit: BPF prog-id=13 op=UNLOAD Dec 13 23:06:35.880000 audit: BPF prog-id=15 op=LOAD Dec 13 23:06:35.880000 audit: BPF prog-id=16 op=LOAD Dec 13 23:06:35.880000 audit: BPF prog-id=17 op=LOAD Dec 13 23:06:35.921000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 23:06:35.921000 audit[1194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffda554ca0 a2=4000 a3=0 items=0 ppid=1 pid=1194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:35.921000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 23:06:35.672796 systemd[1]: Queued start job for default target multi-user.target. Dec 13 23:06:35.698334 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 23:06:35.698806 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 23:06:35.928237 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 23:06:35.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.929278 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 23:06:35.932152 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 23:06:35.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.933552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 13 23:06:35.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.936484 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 23:06:35.936675 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 23:06:35.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.938338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 23:06:35.938522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 23:06:35.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.939912 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 23:06:35.940080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 23:06:35.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:06:35.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.941409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 23:06:35.941580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 23:06:35.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.943158 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 23:06:35.943347 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 23:06:35.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.944616 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 23:06:35.944780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 23:06:35.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:06:35.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.947594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 23:06:35.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.949392 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 23:06:35.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.951519 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 23:06:35.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.953440 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 23:06:35.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.967519 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 23:06:35.969086 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 23:06:35.971446 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 13 23:06:35.973666 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 23:06:35.974793 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 23:06:35.974837 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 23:06:35.976918 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 13 23:06:35.978692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 23:06:35.978821 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 23:06:35.981044 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 23:06:35.983244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 23:06:35.984302 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 23:06:35.985351 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 23:06:35.986496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 23:06:35.988297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 23:06:35.990709 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 23:06:35.991523 systemd-journald[1194]: Time spent on flushing to /var/log/journal/445a64f773174dd39b68e52d49081e46 is 18.401ms for 998 entries. Dec 13 23:06:35.991523 systemd-journald[1194]: System Journal (/var/log/journal/445a64f773174dd39b68e52d49081e46) is 8M, max 163.5M, 155.5M free. 
Dec 13 23:06:36.016469 systemd-journald[1194]: Received client request to flush runtime journal. Dec 13 23:06:36.016595 kernel: loop1: detected capacity change from 0 to 211168 Dec 13 23:06:35.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:35.995786 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 23:06:35.997797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 23:06:36.000084 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 23:06:36.003999 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 23:06:36.005778 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 23:06:36.009281 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 23:06:36.014321 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 23:06:36.018236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 23:06:36.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.027289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 23:06:36.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.037131 kernel: loop2: detected capacity change from 0 to 353272 Dec 13 23:06:36.039128 kernel: loop2: p1 p2 p3 Dec 13 23:06:36.041972 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 23:06:36.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.045799 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 23:06:36.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.048000 audit: BPF prog-id=18 op=LOAD Dec 13 23:06:36.048000 audit: BPF prog-id=19 op=LOAD Dec 13 23:06:36.048000 audit: BPF prog-id=20 op=LOAD Dec 13 23:06:36.050276 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 23:06:36.052000 audit: BPF prog-id=21 op=LOAD Dec 13 23:06:36.052841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 23:06:36.055297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 23:06:36.057126 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Dec 13 23:06:36.062000 audit: BPF prog-id=22 op=LOAD Dec 13 23:06:36.062000 audit: BPF prog-id=23 op=LOAD Dec 13 23:06:36.062000 audit: BPF prog-id=24 op=LOAD Dec 13 23:06:36.063667 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
Dec 13 23:06:36.065000 audit: BPF prog-id=25 op=LOAD Dec 13 23:06:36.065000 audit: BPF prog-id=26 op=LOAD Dec 13 23:06:36.065000 audit: BPF prog-id=27 op=LOAD Dec 13 23:06:36.066188 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 23:06:36.077744 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 23:06:36.077762 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 23:06:36.080129 kernel: loop3: detected capacity change from 0 to 161080 Dec 13 23:06:36.081539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 23:06:36.083127 kernel: loop3: p1 p2 p3 Dec 13 23:06:36.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.096134 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 23:06:36.096587 systemd-nsresourced[1263]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 23:06:36.097917 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 23:06:36.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.099693 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 23:06:36.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:06:36.112201 kernel: loop4: detected capacity change from 0 to 211168 Dec 13 23:06:36.120125 kernel: loop5: detected capacity change from 0 to 353272 Dec 13 23:06:36.120206 kernel: loop5: p1 p2 p3 Dec 13 23:06:36.134862 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 23:06:36.134948 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 23:06:36.134971 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 23:06:36.135686 kernel: device-mapper: ioctl: error adding target to table Dec 13 23:06:36.135769 (sd-merge)[1280]: device-mapper: reload ioctl on b35b2492fcca387995ac7cc700425775891a7db9ed46359c680e82ec44f4021d-verity (253:1) failed: Invalid argument Dec 13 23:06:36.144129 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 23:06:36.149554 systemd-oomd[1260]: No swap; memory pressure usage will be degraded Dec 13 23:06:36.150182 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 23:06:36.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.156233 systemd-resolved[1261]: Positive Trust Anchors: Dec 13 23:06:36.156250 systemd-resolved[1261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 23:06:36.156253 systemd-resolved[1261]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 23:06:36.156284 systemd-resolved[1261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 23:06:36.159802 systemd-resolved[1261]: Defaulting to hostname 'linux'. Dec 13 23:06:36.161398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 23:06:36.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.162523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 23:06:36.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.419000 audit: BPF prog-id=8 op=UNLOAD Dec 13 23:06:36.419000 audit: BPF prog-id=7 op=UNLOAD Dec 13 23:06:36.420000 audit: BPF prog-id=28 op=LOAD Dec 13 23:06:36.420000 audit: BPF prog-id=29 op=LOAD Dec 13 23:06:36.418236 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 23:06:36.421189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 23:06:36.456550 systemd-udevd[1287]: Using default interface naming scheme 'v257'. 
Dec 13 23:06:36.472997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 23:06:36.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.475000 audit: BPF prog-id=30 op=LOAD Dec 13 23:06:36.476734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 23:06:36.537425 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 23:06:36.538979 systemd-networkd[1296]: lo: Link UP Dec 13 23:06:36.538987 systemd-networkd[1296]: lo: Gained carrier Dec 13 23:06:36.539768 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 23:06:36.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.543199 systemd[1]: Reached target network.target - Network. Dec 13 23:06:36.546368 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 23:06:36.548849 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 23:06:36.565480 systemd-networkd[1296]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 23:06:36.565492 systemd-networkd[1296]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 23:06:36.566725 systemd-networkd[1296]: eth0: Link UP Dec 13 23:06:36.567231 systemd-networkd[1296]: eth0: Gained carrier Dec 13 23:06:36.567251 systemd-networkd[1296]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 23:06:36.572818 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 13 23:06:36.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.577185 systemd-networkd[1296]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 23:06:36.594122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 23:06:36.596586 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 23:06:36.624530 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 23:06:36.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.647325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 23:06:36.668130 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. 
Dec 13 23:06:36.670136 kernel: loop6: detected capacity change from 0 to 161080 Dec 13 23:06:36.671247 kernel: loop6: p1 p2 p3 Dec 13 23:06:36.679921 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 23:06:36.679999 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 23:06:36.682809 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 23:06:36.682850 kernel: device-mapper: ioctl: error adding target to table Dec 13 23:06:36.682837 (sd-merge)[1280]: device-mapper: reload ioctl on cf827620bc7ad537f83bb2a823378974b3cc077c207d7b04c642a58e7bc0ec99-verity (253:2) failed: Invalid argument Dec 13 23:06:36.686141 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 23:06:36.703134 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 23:06:36.703230 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 23:06:36.703260 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 13 23:06:36.706029 (sd-merge)[1280]: Merged extensions into '/usr'. Dec 13 23:06:36.707277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 23:06:36.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:36.710942 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 23:06:36.710957 systemd[1]: Reloading... Dec 13 23:06:36.768143 zram_generator::config[1384]: No configuration found. Dec 13 23:06:36.951242 systemd[1]: Reloading finished in 239 ms. Dec 13 23:06:36.984360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 13 23:06:36.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:37.004454 systemd[1]: Starting ensure-sysext.service... Dec 13 23:06:37.006372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 23:06:37.008000 audit: BPF prog-id=31 op=LOAD Dec 13 23:06:37.008000 audit: BPF prog-id=18 op=UNLOAD Dec 13 23:06:37.008000 audit: BPF prog-id=32 op=LOAD Dec 13 23:06:37.008000 audit: BPF prog-id=33 op=LOAD Dec 13 23:06:37.008000 audit: BPF prog-id=19 op=UNLOAD Dec 13 23:06:37.008000 audit: BPF prog-id=20 op=UNLOAD Dec 13 23:06:37.009000 audit: BPF prog-id=34 op=LOAD Dec 13 23:06:37.009000 audit: BPF prog-id=30 op=UNLOAD Dec 13 23:06:37.009000 audit: BPF prog-id=35 op=LOAD Dec 13 23:06:37.009000 audit: BPF prog-id=36 op=LOAD Dec 13 23:06:37.009000 audit: BPF prog-id=28 op=UNLOAD Dec 13 23:06:37.009000 audit: BPF prog-id=29 op=UNLOAD Dec 13 23:06:37.010000 audit: BPF prog-id=37 op=LOAD Dec 13 23:06:37.010000 audit: BPF prog-id=22 op=UNLOAD Dec 13 23:06:37.010000 audit: BPF prog-id=38 op=LOAD Dec 13 23:06:37.010000 audit: BPF prog-id=39 op=LOAD Dec 13 23:06:37.010000 audit: BPF prog-id=23 op=UNLOAD Dec 13 23:06:37.010000 audit: BPF prog-id=24 op=UNLOAD Dec 13 23:06:37.011000 audit: BPF prog-id=40 op=LOAD Dec 13 23:06:37.011000 audit: BPF prog-id=21 op=UNLOAD Dec 13 23:06:37.011000 audit: BPF prog-id=41 op=LOAD Dec 13 23:06:37.011000 audit: BPF prog-id=25 op=UNLOAD Dec 13 23:06:37.011000 audit: BPF prog-id=42 op=LOAD Dec 13 23:06:37.011000 audit: BPF prog-id=43 op=LOAD Dec 13 23:06:37.011000 audit: BPF prog-id=26 op=UNLOAD Dec 13 23:06:37.011000 audit: BPF prog-id=27 op=UNLOAD Dec 13 23:06:37.012000 audit: BPF prog-id=44 op=LOAD Dec 13 23:06:37.012000 audit: BPF prog-id=15 op=UNLOAD Dec 13 23:06:37.012000 audit: BPF prog-id=45 op=LOAD Dec 13 
23:06:37.012000 audit: BPF prog-id=46 op=LOAD Dec 13 23:06:37.012000 audit: BPF prog-id=16 op=UNLOAD Dec 13 23:06:37.012000 audit: BPF prog-id=17 op=UNLOAD Dec 13 23:06:37.017569 systemd[1]: Reload requested from client PID 1417 ('systemctl') (unit ensure-sysext.service)... Dec 13 23:06:37.017587 systemd[1]: Reloading... Dec 13 23:06:37.020572 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 23:06:37.020891 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 23:06:37.021450 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 23:06:37.022532 systemd-tmpfiles[1418]: ACLs are not supported, ignoring. Dec 13 23:06:37.022688 systemd-tmpfiles[1418]: ACLs are not supported, ignoring. Dec 13 23:06:37.028722 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 23:06:37.028732 systemd-tmpfiles[1418]: Skipping /boot Dec 13 23:06:37.035270 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 23:06:37.035363 systemd-tmpfiles[1418]: Skipping /boot Dec 13 23:06:37.076203 zram_generator::config[1454]: No configuration found. Dec 13 23:06:37.254391 systemd[1]: Reloading finished in 236 ms. 
Dec 13 23:06:37.279000 audit: BPF prog-id=47 op=LOAD Dec 13 23:06:37.279000 audit: BPF prog-id=48 op=LOAD Dec 13 23:06:37.279000 audit: BPF prog-id=35 op=UNLOAD Dec 13 23:06:37.279000 audit: BPF prog-id=36 op=UNLOAD Dec 13 23:06:37.280000 audit: BPF prog-id=49 op=LOAD Dec 13 23:06:37.280000 audit: BPF prog-id=41 op=UNLOAD Dec 13 23:06:37.280000 audit: BPF prog-id=50 op=LOAD Dec 13 23:06:37.280000 audit: BPF prog-id=51 op=LOAD Dec 13 23:06:37.280000 audit: BPF prog-id=42 op=UNLOAD Dec 13 23:06:37.280000 audit: BPF prog-id=43 op=UNLOAD Dec 13 23:06:37.281000 audit: BPF prog-id=52 op=LOAD Dec 13 23:06:37.281000 audit: BPF prog-id=40 op=UNLOAD Dec 13 23:06:37.282000 audit: BPF prog-id=53 op=LOAD Dec 13 23:06:37.282000 audit: BPF prog-id=31 op=UNLOAD Dec 13 23:06:37.283000 audit: BPF prog-id=54 op=LOAD Dec 13 23:06:37.283000 audit: BPF prog-id=55 op=LOAD Dec 13 23:06:37.290000 audit: BPF prog-id=32 op=UNLOAD Dec 13 23:06:37.290000 audit: BPF prog-id=33 op=UNLOAD Dec 13 23:06:37.296000 audit: BPF prog-id=56 op=LOAD Dec 13 23:06:37.297000 audit: BPF prog-id=34 op=UNLOAD Dec 13 23:06:37.297000 audit: BPF prog-id=57 op=LOAD Dec 13 23:06:37.297000 audit: BPF prog-id=44 op=UNLOAD Dec 13 23:06:37.297000 audit: BPF prog-id=58 op=LOAD Dec 13 23:06:37.298000 audit: BPF prog-id=59 op=LOAD Dec 13 23:06:37.298000 audit: BPF prog-id=45 op=UNLOAD Dec 13 23:06:37.298000 audit: BPF prog-id=46 op=UNLOAD Dec 13 23:06:37.298000 audit: BPF prog-id=60 op=LOAD Dec 13 23:06:37.298000 audit: BPF prog-id=37 op=UNLOAD Dec 13 23:06:37.299000 audit: BPF prog-id=61 op=LOAD Dec 13 23:06:37.299000 audit: BPF prog-id=62 op=LOAD Dec 13 23:06:37.299000 audit: BPF prog-id=38 op=UNLOAD Dec 13 23:06:37.299000 audit: BPF prog-id=39 op=UNLOAD Dec 13 23:06:37.301685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 13 23:06:37.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:37.312997 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 23:06:37.315945 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 23:06:37.327219 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 23:06:37.329429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 23:06:37.331621 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 23:06:37.335634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 23:06:37.344372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 23:06:37.346766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 23:06:37.350942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 23:06:37.350000 audit[1492]: SYSTEM_BOOT pid=1492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 23:06:37.352416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 23:06:37.352591 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 13 23:06:37.352683 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 13 23:06:37.356554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 23:06:37.356974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 23:06:37.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.360572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 23:06:37.361354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 23:06:37.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.364955 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 23:06:37.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.366999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 23:06:37.367215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 23:06:37.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.373785 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 23:06:37.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:37.375000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 23:06:37.375000 audit[1518]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc4fae230 a2=420 a3=0 items=0 ppid=1487 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:37.375000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 23:06:37.375842 augenrules[1518]: No rules
Dec 13 23:06:37.376824 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 23:06:37.377060 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 23:06:37.382989 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 23:06:37.384203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 23:06:37.386410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 23:06:37.398721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 23:06:37.402326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 23:06:37.405681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 23:06:37.406824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 23:06:37.406993 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 13 23:06:37.407087 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 13 23:06:37.409867 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 23:06:37.411588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 23:06:37.413182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 23:06:37.414525 augenrules[1525]: /sbin/augenrules: No change
Dec 13 23:06:37.415094 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 23:06:37.416797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 23:06:37.418691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 23:06:37.418862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 23:06:37.420441 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 23:06:37.420612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 23:06:37.421000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 23:06:37.421000 audit[1546]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff6393040 a2=420 a3=0 items=0 ppid=1525 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:37.421000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 23:06:37.421000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 23:06:37.421000 audit[1546]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff63954c0 a2=420 a3=0 items=0 ppid=1525 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:37.421000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 23:06:37.422014 augenrules[1546]: No rules
Dec 13 23:06:37.422922 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 23:06:37.423257 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 23:06:37.426761 systemd[1]: Finished ensure-sysext.service.
Dec 13 23:06:37.431907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 23:06:37.431971 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 23:06:37.433572 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 23:06:37.434915 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 23:06:37.487812 systemd-timesyncd[1558]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 23:06:37.487856 systemd-timesyncd[1558]: Initial clock synchronization to Sat 2025-12-13 23:06:37.771880 UTC.
Dec 13 23:06:37.487938 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 23:06:37.489479 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 23:06:37.594396 ldconfig[1489]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 23:06:37.671097 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 23:06:37.673622 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 23:06:37.703282 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 23:06:37.704589 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 23:06:37.705704 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 23:06:37.706933 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 23:06:37.708437 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 23:06:37.709522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 23:06:37.710710 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 13 23:06:37.711942 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 13 23:06:37.712973 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 23:06:37.714153 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 23:06:37.714197 systemd[1]: Reached target paths.target - Path Units.
Dec 13 23:06:37.714972 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 23:06:37.716608 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 23:06:37.719140 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 23:06:37.721916 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 13 23:06:37.723310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 13 23:06:37.724525 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 13 23:06:37.733094 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 23:06:37.734434 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 13 23:06:37.736133 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 23:06:37.737234 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 23:06:37.738122 systemd[1]: Reached target basic.target - Basic System.
Dec 13 23:06:37.738998 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 23:06:37.739031 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 23:06:37.740056 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 23:06:37.742164 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 23:06:37.743980 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 23:06:37.746029 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 23:06:37.748041 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 23:06:37.749160 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 23:06:37.750758 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 23:06:37.753974 jq[1571]: false
Dec 13 23:06:37.754052 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 23:06:37.756331 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 23:06:37.760312 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 23:06:37.763636 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 23:06:37.764674 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 23:06:37.765226 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 23:06:37.765764 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 23:06:37.766048 extend-filesystems[1572]: Found /dev/vda6
Dec 13 23:06:37.767587 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 23:06:37.773913 extend-filesystems[1572]: Found /dev/vda9
Dec 13 23:06:37.775824 extend-filesystems[1572]: Checking size of /dev/vda9
Dec 13 23:06:37.777344 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 23:06:37.781081 jq[1588]: true
Dec 13 23:06:37.781411 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 23:06:37.781695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 23:06:37.782002 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 23:06:37.782626 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 23:06:37.786445 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 23:06:37.786631 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 23:06:37.799082 extend-filesystems[1572]: Resized partition /dev/vda9
Dec 13 23:06:37.810940 jq[1601]: true
Dec 13 23:06:37.811717 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025)
Dec 13 23:06:37.826122 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Dec 13 23:06:37.829921 update_engine[1586]: I20251213 23:06:37.829699 1586 main.cc:92] Flatcar Update Engine starting
Dec 13 23:06:37.833648 tar[1600]: linux-arm64/LICENSE
Dec 13 23:06:37.835072 tar[1600]: linux-arm64/helm
Dec 13 23:06:37.836446 dbus-daemon[1569]: [system] SELinux support is enabled
Dec 13 23:06:37.839393 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 23:06:37.843737 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 23:06:37.843788 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 23:06:37.846243 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 23:06:37.846267 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 23:06:37.851479 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 23:06:37.853738 update_engine[1586]: I20251213 23:06:37.853681 1586 update_check_scheduler.cc:74] Next update check in 10m31s
Dec 13 23:06:37.854334 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 23:06:37.869130 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 23:06:37.871394 systemd-logind[1585]: New seat seat0.
Dec 13 23:06:37.874075 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 23:06:37.877128 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Dec 13 23:06:37.891437 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 23:06:37.891437 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 23:06:37.891437 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Dec 13 23:06:37.895837 extend-filesystems[1572]: Resized filesystem in /dev/vda9
Dec 13 23:06:37.897177 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 23:06:37.897582 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 23:06:37.905411 bash[1643]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 23:06:37.909502 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 23:06:37.917280 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 23:06:37.946726 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 23:06:37.991979 containerd[1603]: time="2025-12-13T23:06:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 13 23:06:37.994065 containerd[1603]: time="2025-12-13T23:06:37.994024200Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 13 23:06:38.007999 containerd[1603]: time="2025-12-13T23:06:38.007951828Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.05µs"
Dec 13 23:06:38.007999 containerd[1603]: time="2025-12-13T23:06:38.007991385Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 13 23:06:38.008095 containerd[1603]: time="2025-12-13T23:06:38.008036202Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 13 23:06:38.008095 containerd[1603]: time="2025-12-13T23:06:38.008049374Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 13 23:06:38.008241 containerd[1603]: time="2025-12-13T23:06:38.008215098Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 13 23:06:38.008241 containerd[1603]: time="2025-12-13T23:06:38.008237797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008331 containerd[1603]: time="2025-12-13T23:06:38.008307963Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008331 containerd[1603]: time="2025-12-13T23:06:38.008326437Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008633 containerd[1603]: time="2025-12-13T23:06:38.008609298Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008633 containerd[1603]: time="2025-12-13T23:06:38.008631624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008754 containerd[1603]: time="2025-12-13T23:06:38.008700714Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008754 containerd[1603]: time="2025-12-13T23:06:38.008710406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008929 containerd[1603]: time="2025-12-13T23:06:38.008878699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 13 23:06:38.008963 containerd[1603]: time="2025-12-13T23:06:38.008946256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 13 23:06:38.009135 containerd[1603]: time="2025-12-13T23:06:38.009101873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 13 23:06:38.009166 containerd[1603]: time="2025-12-13T23:06:38.009155720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 13 23:06:38.009186 containerd[1603]: time="2025-12-13T23:06:38.009167028Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 13 23:06:38.009599 containerd[1603]: time="2025-12-13T23:06:38.009536790Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 13 23:06:38.010291 containerd[1603]: time="2025-12-13T23:06:38.010260409Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 13 23:06:38.010541 containerd[1603]: time="2025-12-13T23:06:38.010467844Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 23:06:38.018234 containerd[1603]: time="2025-12-13T23:06:38.018196816Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 13 23:06:38.018442 containerd[1603]: time="2025-12-13T23:06:38.018422310Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 13 23:06:38.018878 containerd[1603]: time="2025-12-13T23:06:38.018792362Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.018978092Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019002779Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019023282Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019035377Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019045939Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019061596Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019077129Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019089058Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019101692Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019111260Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019124929Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019287049Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019310245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 13 23:06:38.019562 containerd[1603]: time="2025-12-13T23:06:38.019324742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019335097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019345411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019354772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019372293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019389400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019400542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019411560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019422868Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 13 23:06:38.019830 containerd[1603]: time="2025-12-13T23:06:38.019449708Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 13 23:06:38.020651 containerd[1603]: time="2025-12-13T23:06:38.020584759Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 13 23:06:38.020831 containerd[1603]: time="2025-12-13T23:06:38.020813732Z" level=info msg="Start snapshots syncer"
Dec 13 23:06:38.027484 containerd[1603]: time="2025-12-13T23:06:38.027434328Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 13 23:06:38.027808 containerd[1603]: time="2025-12-13T23:06:38.027770250Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 13 23:06:38.027924 containerd[1603]: time="2025-12-13T23:06:38.027824345Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 13 23:06:38.028426 containerd[1603]: time="2025-12-13T23:06:38.028374743Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 13 23:06:38.028824 containerd[1603]: time="2025-12-13T23:06:38.028797441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 13 23:06:38.028845 containerd[1603]: time="2025-12-13T23:06:38.028837246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 13 23:06:38.028864 containerd[1603]: time="2025-12-13T23:06:38.028850170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 13 23:06:38.028885 containerd[1603]: time="2025-12-13T23:06:38.028862720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 13 23:06:38.028885 containerd[1603]: time="2025-12-13T23:06:38.028879495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 13 23:06:38.028929 containerd[1603]: time="2025-12-13T23:06:38.028889933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 13 23:06:38.028929 containerd[1603]: time="2025-12-13T23:06:38.028901034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 13 23:06:38.028929 containerd[1603]: time="2025-12-13T23:06:38.028911638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 13 23:06:38.028983 containerd[1603]: time="2025-12-13T23:06:38.028932928Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 13 23:06:38.029001 containerd[1603]: time="2025-12-13T23:06:38.028984580Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 23:06:38.029023 containerd[1603]: time="2025-12-13T23:06:38.028999491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 23:06:38.029023 containerd[1603]: time="2025-12-13T23:06:38.029009018Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 23:06:38.029057 containerd[1603]: time="2025-12-13T23:06:38.029020699Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 23:06:38.029257 containerd[1603]: time="2025-12-13T23:06:38.029029273Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 13 23:06:38.029280 containerd[1603]: time="2025-12-13T23:06:38.029260069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 13 23:06:38.029280 containerd[1603]: time="2025-12-13T23:06:38.029274110Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 13 23:06:38.029358 containerd[1603]: time="2025-12-13T23:06:38.029347922Z" level=info msg="runtime interface created"
Dec 13 23:06:38.029384 containerd[1603]: time="2025-12-13T23:06:38.029356372Z" level=info msg="created NRI interface"
Dec 13 23:06:38.029384 containerd[1603]: time="2025-12-13T23:06:38.029369129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 13 23:06:38.029384 containerd[1603]: time="2025-12-13T23:06:38.029382260Z" level=info msg="Connect containerd service"
Dec 13 23:06:38.029434 containerd[1603]: time="2025-12-13T23:06:38.029405041Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 23:06:38.030901 containerd[1603]: time="2025-12-13T23:06:38.030869469Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 23:06:38.121043 containerd[1603]: time="2025-12-13T23:06:38.120874173Z" level=info msg="Start subscribing containerd event"
Dec 13 23:06:38.121166 containerd[1603]: time="2025-12-13T23:06:38.120998767Z" level=info msg="Start recovering state"
Dec 13 23:06:38.122039 containerd[1603]: time="2025-12-13T23:06:38.122002969Z" level=info msg="Start event monitor"
Dec 13 23:06:38.122243 containerd[1603]: time="2025-12-13T23:06:38.122224818Z" level=info msg="Start cni network conf syncer for default"
Dec 13 23:06:38.122274 containerd[1603]: time="2025-12-13T23:06:38.122248180Z" level=info msg="Start streaming server"
Dec 13 23:06:38.122527 containerd[1603]: time="2025-12-13T23:06:38.122492769Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 13 23:06:38.122527 containerd[1603]: time="2025-12-13T23:06:38.122518657Z" level=info msg="runtime interface starting up..."
Dec 13 23:06:38.122527 containerd[1603]: time="2025-12-13T23:06:38.122526237Z" level=info msg="starting plugins..."
Dec 13 23:06:38.122598 containerd[1603]: time="2025-12-13T23:06:38.122545207Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 13 23:06:38.123814 containerd[1603]: time="2025-12-13T23:06:38.123203423Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 23:06:38.124168 containerd[1603]: time="2025-12-13T23:06:38.124145578Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 23:06:38.125624 containerd[1603]: time="2025-12-13T23:06:38.124540482Z" level=info msg="containerd successfully booted in 0.133009s"
Dec 13 23:06:38.124720 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 23:06:38.196579 tar[1600]: linux-arm64/README.md
Dec 13 23:06:38.214614 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 23:06:38.319285 systemd-networkd[1296]: eth0: Gained IPv6LL
Dec 13 23:06:38.325681 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 23:06:38.327445 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 23:06:38.329869 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 23:06:38.332371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 23:06:38.336368 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 23:06:38.357572 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 23:06:38.359214 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 23:06:38.361795 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 23:06:38.364717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 23:06:38.533993 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 23:06:38.554281 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 23:06:38.557836 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 23:06:38.579356 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 23:06:38.579636 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 23:06:38.583606 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 23:06:38.601205 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 23:06:38.604913 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 23:06:38.607448 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 23:06:38.608786 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 23:06:38.915809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 23:06:38.917851 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 23:06:38.919138 systemd[1]: Startup finished in 1.472s (kernel) + 5.260s (initrd) + 3.690s (userspace) = 10.423s.
Dec 13 23:06:38.930473 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 23:06:39.293389 kubelet[1711]: E1213 23:06:39.293313 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 23:06:39.295546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 23:06:39.295680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 23:06:39.296037 systemd[1]: kubelet.service: Consumed 756ms CPU time, 258.5M memory peak.
Dec 13 23:06:41.294896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 23:06:41.295987 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:33396.service - OpenSSH per-connection server daemon (10.0.0.1:33396).
Dec 13 23:06:41.381181 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 33396 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:41.382839 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:41.388757 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 23:06:41.389599 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 23:06:41.393964 systemd-logind[1585]: New session 1 of user core.
Dec 13 23:06:41.413038 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 23:06:41.417397 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 23:06:41.429905 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:41.432484 systemd-logind[1585]: New session 2 of user core.
Dec 13 23:06:41.552676 systemd[1730]: Queued start job for default target default.target.
Dec 13 23:06:41.577048 systemd[1730]: Created slice app.slice - User Application Slice.
Dec 13 23:06:41.577080 systemd[1730]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 13 23:06:41.577092 systemd[1730]: Reached target paths.target - Paths.
Dec 13 23:06:41.577172 systemd[1730]: Reached target timers.target - Timers.
Dec 13 23:06:41.578359 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 23:06:41.579115 systemd[1730]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 13 23:06:41.588142 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 23:06:41.588201 systemd[1730]: Reached target sockets.target - Sockets.
Dec 13 23:06:41.588635 systemd[1730]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 13 23:06:41.588692 systemd[1730]: Reached target basic.target - Basic System.
Dec 13 23:06:41.588731 systemd[1730]: Reached target default.target - Main User Target.
Dec 13 23:06:41.588753 systemd[1730]: Startup finished in 151ms.
Dec 13 23:06:41.589042 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 23:06:41.601393 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 23:06:41.619392 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:33402.service - OpenSSH per-connection server daemon (10.0.0.1:33402).
Dec 13 23:06:41.666443 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:41.667771 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:41.672478 systemd-logind[1585]: New session 3 of user core.
Dec 13 23:06:41.682395 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 23:06:41.693245 sshd[1748]: Connection closed by 10.0.0.1 port 33402
Dec 13 23:06:41.693619 sshd-session[1744]: pam_unix(sshd:session): session closed for user core
Dec 13 23:06:41.700965 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:33402.service: Deactivated successfully.
Dec 13 23:06:41.703479 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 23:06:41.704610 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit.
Dec 13 23:06:41.706455 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:33410.service - OpenSSH per-connection server daemon (10.0.0.1:33410).
Dec 13 23:06:41.707075 systemd-logind[1585]: Removed session 3.
Dec 13 23:06:41.762947 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 33410 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:41.764196 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:41.768059 systemd-logind[1585]: New session 4 of user core.
Dec 13 23:06:41.776345 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 23:06:41.784107 sshd[1758]: Connection closed by 10.0.0.1 port 33410
Dec 13 23:06:41.784444 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Dec 13 23:06:41.804355 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:33410.service: Deactivated successfully.
Dec 13 23:06:41.806227 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 23:06:41.807036 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit.
Dec 13 23:06:41.809658 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:33418.service - OpenSSH per-connection server daemon (10.0.0.1:33418).
Dec 13 23:06:41.810492 systemd-logind[1585]: Removed session 4.
Dec 13 23:06:41.860431 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 33418 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:41.861694 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:41.866954 systemd-logind[1585]: New session 5 of user core.
Dec 13 23:06:41.873298 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 23:06:41.885410 sshd[1768]: Connection closed by 10.0.0.1 port 33418
Dec 13 23:06:41.885798 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Dec 13 23:06:41.894315 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:33418.service: Deactivated successfully.
Dec 13 23:06:41.895839 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 23:06:41.896656 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit.
Dec 13 23:06:41.898944 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:33426.service - OpenSSH per-connection server daemon (10.0.0.1:33426).
Dec 13 23:06:41.899405 systemd-logind[1585]: Removed session 5.
Dec 13 23:06:41.953199 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 33426 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:41.954565 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:41.959194 systemd-logind[1585]: New session 6 of user core.
Dec 13 23:06:41.970305 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 23:06:41.986794 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 23:06:41.987068 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 23:06:42.010038 sudo[1779]: pam_unix(sudo:session): session closed for user root
Dec 13 23:06:42.011405 sshd[1778]: Connection closed by 10.0.0.1 port 33426
Dec 13 23:06:42.011701 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Dec 13 23:06:42.029277 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:33426.service: Deactivated successfully.
Dec 13 23:06:42.030978 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 23:06:42.032638 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit.
Dec 13 23:06:42.034201 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:33432.service - OpenSSH per-connection server daemon (10.0.0.1:33432).
Dec 13 23:06:42.035317 systemd-logind[1585]: Removed session 6.
Dec 13 23:06:42.086342 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 33432 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:42.087630 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:42.093904 systemd-logind[1585]: New session 7 of user core.
Dec 13 23:06:42.103728 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 23:06:42.117024 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 23:06:42.117332 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 23:06:42.121658 sudo[1792]: pam_unix(sudo:session): session closed for user root
Dec 13 23:06:42.129626 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 23:06:42.129883 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 23:06:42.136637 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 23:06:42.175000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 23:06:42.178612 kernel: kauditd_printk_skb: 195 callbacks suppressed
Dec 13 23:06:42.178645 kernel: audit: type=1305 audit(1765667202.175:234): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 23:06:42.175000 audit[1816]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffb1aa580 a2=420 a3=0 items=0 ppid=1797 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:42.182152 kernel: audit: type=1300 audit(1765667202.175:234): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffb1aa580 a2=420 a3=0 items=0 ppid=1797 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:42.182261 augenrules[1816]: No rules
Dec 13 23:06:42.175000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 23:06:42.183799 kernel: audit: type=1327 audit(1765667202.175:234): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 23:06:42.184347 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 23:06:42.186203 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 23:06:42.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.187984 sudo[1791]: pam_unix(sudo:session): session closed for user root
Dec 13 23:06:42.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.189187 sshd[1790]: Connection closed by 10.0.0.1 port 33432
Dec 13 23:06:42.189609 sshd-session[1786]: pam_unix(sshd:session): session closed for user core
Dec 13 23:06:42.191728 kernel: audit: type=1130 audit(1765667202.185:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.191809 kernel: audit: type=1131 audit(1765667202.185:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.185000 audit[1791]: USER_END pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.194693 kernel: audit: type=1106 audit(1765667202.185:237): pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.185000 audit[1791]: CRED_DISP pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.197190 kernel: audit: type=1104 audit(1765667202.185:238): pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.197217 kernel: audit: type=1106 audit(1765667202.188:239): pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.188000 audit[1786]: USER_END pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.200878 kernel: audit: type=1104 audit(1765667202.188:240): pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.188000 audit[1786]: CRED_DISP pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.59:22-10.0.0.1:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.209515 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:33432.service: Deactivated successfully.
Dec 13 23:06:42.211723 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 23:06:42.213161 kernel: audit: type=1131 audit(1765667202.208:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.59:22-10.0.0.1:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.213632 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit.
Dec 13 23:06:42.215529 systemd-logind[1585]: Removed session 7.
Dec 13 23:06:42.217511 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:33442.service - OpenSSH per-connection server daemon (10.0.0.1:33442).
Dec 13 23:06:42.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:33442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.273000 audit[1825]: USER_ACCT pid=1825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.275243 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 33442 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:06:42.274000 audit[1825]: CRED_ACQ pid=1825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.274000 audit[1825]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1440ef0 a2=3 a3=0 items=0 ppid=1 pid=1825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:42.274000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:06:42.276473 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:06:42.281217 systemd-logind[1585]: New session 8 of user core.
Dec 13 23:06:42.290321 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 23:06:42.291000 audit[1825]: USER_START pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.292000 audit[1829]: CRED_ACQ pid=1829 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:06:42.300000 audit[1830]: USER_ACCT pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.302016 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 23:06:42.301000 audit[1830]: CRED_REFR pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.301000 audit[1830]: USER_START pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 23:06:42.302333 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 23:06:42.583118 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 23:06:42.599407 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 23:06:42.850402 dockerd[1851]: time="2025-12-13T23:06:42.850274370Z" level=info msg="Starting up"
Dec 13 23:06:42.853132 dockerd[1851]: time="2025-12-13T23:06:42.852951121Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 13 23:06:42.863594 dockerd[1851]: time="2025-12-13T23:06:42.863561353Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 13 23:06:42.881155 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport541797230-merged.mount: Deactivated successfully.
Dec 13 23:06:43.062429 dockerd[1851]: time="2025-12-13T23:06:43.062373702Z" level=info msg="Loading containers: start."
Dec 13 23:06:43.074807 kernel: Initializing XFRM netlink socket
Dec 13 23:06:43.114000 audit[1907]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.114000 audit[1907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdfb56e00 a2=0 a3=0 items=0 ppid=1851 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 13 23:06:43.116000 audit[1909]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.116000 audit[1909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc537cd00 a2=0 a3=0 items=0 ppid=1851 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.116000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 13 23:06:43.118000 audit[1911]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.118000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffcf1480 a2=0 a3=0 items=0 ppid=1851 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 13 23:06:43.120000 audit[1913]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.120000 audit[1913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb3c89a0 a2=0 a3=0 items=0 ppid=1851 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.120000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 13 23:06:43.121000 audit[1915]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.121000 audit[1915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff326b490 a2=0 a3=0 items=0 ppid=1851 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.121000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 13 23:06:43.123000 audit[1917]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.123000 audit[1917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff2fb3670 a2=0 a3=0 items=0 ppid=1851 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.123000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 13 23:06:43.126000 audit[1919]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.126000 audit[1919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff472ae50 a2=0 a3=0 items=0 ppid=1851 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 13 23:06:43.128000 audit[1921]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.128000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff4ad4490 a2=0 a3=0 items=0 ppid=1851 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 13 23:06:43.151000 audit[1924]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.151000 audit[1924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd592b5b0 a2=0 a3=0 items=0 ppid=1851 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 13 23:06:43.153000 audit[1926]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.153000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc86e54d0 a2=0 a3=0 items=0 ppid=1851 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 13 23:06:43.154000 audit[1928]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.154000 audit[1928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffee8389f0 a2=0 a3=0 items=0 ppid=1851 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 13 23:06:43.156000 audit[1930]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.156000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd218d7f0 a2=0 a3=0 items=0 ppid=1851 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 13 23:06:43.158000 audit[1932]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 23:06:43.158000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd6baee90 a2=0 a3=0 items=0 ppid=1851 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 13 23:06:43.189000 audit[1962]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.189000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe65d3500 a2=0 a3=0 items=0 ppid=1851 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 13 23:06:43.190000 audit[1964]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.190000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc4043a90 a2=0 a3=0 items=0 ppid=1851 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 13 23:06:43.192000 audit[1966]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.192000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb7ec8a0 a2=0 a3=0 items=0 ppid=1851 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 13 23:06:43.194000 audit[1968]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.194000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8c135e0 a2=0 a3=0 items=0 ppid=1851 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 13 23:06:43.196000 audit[1970]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.196000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd517a930 a2=0 a3=0 items=0 ppid=1851 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 13 23:06:43.198000 audit[1972]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.198000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd8363f50 a2=0 a3=0 items=0 ppid=1851 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:06:43.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 13 23:06:43.200000 audit[1974]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 23:06:43.200000
audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd8e09110 a2=0 a3=0 items=0 ppid=1851 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 23:06:43.202000 audit[1976]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.202000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc8c80740 a2=0 a3=0 items=0 ppid=1851 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.202000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 23:06:43.204000 audit[1978]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.204000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe127d4a0 a2=0 a3=0 items=0 ppid=1851 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 13 
23:06:43.206000 audit[1980]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.206000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff78e9590 a2=0 a3=0 items=0 ppid=1851 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.206000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 23:06:43.208000 audit[1982]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.208000 audit[1982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd55648b0 a2=0 a3=0 items=0 ppid=1851 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 23:06:43.210000 audit[1984]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.210000 audit[1984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdc3a7fd0 a2=0 a3=0 items=0 ppid=1851 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.210000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 23:06:43.212000 audit[1986]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.212000 audit[1986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffad7a950 a2=0 a3=0 items=0 ppid=1851 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 23:06:43.217000 audit[1991]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.217000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffca10f160 a2=0 a3=0 items=0 ppid=1851 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 23:06:43.219000 audit[1993]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.219000 audit[1993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe0416a00 a2=0 a3=0 items=0 ppid=1851 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
23:06:43.219000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 23:06:43.221000 audit[1995]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.221000 audit[1995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcd5a7b80 a2=0 a3=0 items=0 ppid=1851 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 23:06:43.223000 audit[1997]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.223000 audit[1997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8bda6b0 a2=0 a3=0 items=0 ppid=1851 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 23:06:43.225000 audit[1999]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.225000 audit[1999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc8e5dcf0 a2=0 a3=0 items=0 ppid=1851 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.225000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 23:06:43.227000 audit[2001]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:43.227000 audit[2001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd162f460 a2=0 a3=0 items=0 ppid=1851 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 23:06:43.240000 audit[2005]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.240000 audit[2005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffefbe1000 a2=0 a3=0 items=0 ppid=1851 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 23:06:43.243000 audit[2007]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.243000 audit[2007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd1eaf5e0 a2=0 a3=0 items=0 ppid=1851 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 23:06:43.249000 audit[2015]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.249000 audit[2015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc33445e0 a2=0 a3=0 items=0 ppid=1851 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 13 23:06:43.257000 audit[2021]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.257000 audit[2021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe227a4a0 a2=0 a3=0 items=0 ppid=1851 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 13 23:06:43.260000 audit[2023]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.260000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe73fa6a0 a2=0 a3=0 items=0 ppid=1851 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 23:06:43.262000 audit[2025]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.262000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcc3508e0 a2=0 a3=0 items=0 ppid=1851 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 13 23:06:43.263000 audit[2027]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.263000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc20f70d0 a2=0 a3=0 items=0 ppid=1851 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 23:06:43.266000 audit[2029]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:43.266000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff6f3ede0 a2=0 a3=0 items=0 ppid=1851 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:43.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 23:06:43.267763 systemd-networkd[1296]: docker0: Link UP Dec 13 23:06:43.270852 dockerd[1851]: time="2025-12-13T23:06:43.270809199Z" level=info msg="Loading containers: done." Dec 13 23:06:43.287266 dockerd[1851]: time="2025-12-13T23:06:43.287210309Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 23:06:43.287403 dockerd[1851]: time="2025-12-13T23:06:43.287294862Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 23:06:43.288004 dockerd[1851]: time="2025-12-13T23:06:43.287960613Z" level=info msg="Initializing buildkit" Dec 13 23:06:43.312195 dockerd[1851]: time="2025-12-13T23:06:43.312164726Z" level=info msg="Completed buildkit initialization" Dec 13 23:06:43.317359 dockerd[1851]: time="2025-12-13T23:06:43.317329447Z" level=info msg="Daemon has completed initialization" Dec 13 23:06:43.317550 dockerd[1851]: time="2025-12-13T23:06:43.317443895Z" level=info msg="API listen on /run/docker.sock" Dec 13 23:06:43.317594 systemd[1]: Started docker.service - Docker Application Container Engine. 
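The audit `PROCTITLE` records above carry each process's full command line hex-encoded, with NUL bytes separating the argv elements; decoding them recovers the exact `iptables`/`ip6tables` invocations Docker issued while building its chains. A small illustrative decoder (not part of the system's own tooling), applied to the first `PROCTITLE` value in this section:

```python
def decode_proctitle(hex_title: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, argv items NUL-separated."""
    raw = bytes.fromhex(hex_title)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# First PROCTITLE record above:
title = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174"
    "002D4100505245524F5554494E47002D6D006164647274797065002D2D6473"
    "742D74797065004C4F43414C002D6A00444F434B4552"
)
print(decode_proctitle(title))
# /usr/bin/iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
```

The paired `SYSCALL` records all show `arch=c00000b7` (the audit ABI value for 64-bit little-endian AArch64) and `syscall=211`, which on arm64 is `sendmsg(2)`: `xtables-nft-multi` programs the kernel over an nftables netlink socket rather than the legacy setsockopt interface, hence the `op=nft_register_chain`/`nft_register_rule` fields.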
Dec 13 23:06:43.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:43.878450 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2695609716-merged.mount: Deactivated successfully. Dec 13 23:06:43.883339 containerd[1603]: time="2025-12-13T23:06:43.883283999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 13 23:06:44.685467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304454942.mount: Deactivated successfully. Dec 13 23:06:45.256441 containerd[1603]: time="2025-12-13T23:06:45.256381038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:45.257337 containerd[1603]: time="2025-12-13T23:06:45.257287912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Dec 13 23:06:45.258396 containerd[1603]: time="2025-12-13T23:06:45.258360871Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:45.260783 containerd[1603]: time="2025-12-13T23:06:45.260743850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:45.261846 containerd[1603]: time="2025-12-13T23:06:45.261812672Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.378478209s" Dec 13 23:06:45.261903 containerd[1603]: time="2025-12-13T23:06:45.261853879Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 13 23:06:45.263018 containerd[1603]: time="2025-12-13T23:06:45.262990392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 13 23:06:46.449238 containerd[1603]: time="2025-12-13T23:06:46.449194776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:46.449991 containerd[1603]: time="2025-12-13T23:06:46.449947249Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 13 23:06:46.451052 containerd[1603]: time="2025-12-13T23:06:46.451008646Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:46.453966 containerd[1603]: time="2025-12-13T23:06:46.453922133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:46.455623 containerd[1603]: time="2025-12-13T23:06:46.455593200Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 
1.1925722s" Dec 13 23:06:46.455751 containerd[1603]: time="2025-12-13T23:06:46.455712721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 13 23:06:46.456225 containerd[1603]: time="2025-12-13T23:06:46.456203762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 13 23:06:47.578789 containerd[1603]: time="2025-12-13T23:06:47.578736457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:47.579666 containerd[1603]: time="2025-12-13T23:06:47.579347634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 13 23:06:47.580389 containerd[1603]: time="2025-12-13T23:06:47.580358798Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:47.583779 containerd[1603]: time="2025-12-13T23:06:47.582791460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:47.583910 containerd[1603]: time="2025-12-13T23:06:47.583885782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.127650779s" Dec 13 23:06:47.583972 containerd[1603]: time="2025-12-13T23:06:47.583960572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference 
\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 13 23:06:47.584624 containerd[1603]: time="2025-12-13T23:06:47.584601302Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 13 23:06:48.466702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827348153.mount: Deactivated successfully. Dec 13 23:06:48.695610 containerd[1603]: time="2025-12-13T23:06:48.695544185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:48.696637 containerd[1603]: time="2025-12-13T23:06:48.696582032Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Dec 13 23:06:48.697262 containerd[1603]: time="2025-12-13T23:06:48.697225428Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:48.699504 containerd[1603]: time="2025-12-13T23:06:48.699468392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:48.700053 containerd[1603]: time="2025-12-13T23:06:48.700018968Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.11538663s" Dec 13 23:06:48.700053 containerd[1603]: time="2025-12-13T23:06:48.700051954Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 13 23:06:48.700639 
containerd[1603]: time="2025-12-13T23:06:48.700619245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 13 23:06:49.377031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 23:06:49.378608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:06:49.383082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089310992.mount: Deactivated successfully. Dec 13 23:06:49.525497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:06:49.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:49.526378 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 13 23:06:49.526457 kernel: audit: type=1130 audit(1765667209.524:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:49.529591 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 23:06:49.567774 kubelet[2168]: E1213 23:06:49.567705 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 23:06:49.570851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 23:06:49.570970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 23:06:49.572244 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.4M memory peak. 
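The kubelet exit above ("failed to load Kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is the usual state on a node that has not yet run `kubeadm init`/`kubeadm join`, which is what typically writes that file; systemd then schedules restarts, as the rising restart counter shows. The `E1213 23:06:49.567705 2168 run.go:72]` prefix is the standard klog header (`Lmmdd hh:mm:ss.uuuuuu threadid file:line]`). A sketch of parsing that header, assuming only the format seen in the line above:

```python
import re

# klog header: severity letter, mmdd, wall-clock time, PID, source file:line,
# closed by "]". The pid field may carry leading padding spaces.
KLOG_RE = re.compile(
    r'(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +(?P<pid>\d+) '
    r'(?P<file>[\w.]+):(?P<line>\d+)\]'
)

header = "E1213 23:06:49.567705 2168 run.go:72]"
m = KLOG_RE.match(header)
print(m.group("severity"), m.group("file"), m.group("line"))
# E run.go 72
```

Severity `E` marks an error-level record; the same parser would match the `I`/`W`/`F` variants kubelet emits once it is running with a valid config.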
Dec 13 23:06:49.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 23:06:49.576137 kernel: audit: type=1131 audit(1765667209.571:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 23:06:49.954431 containerd[1603]: time="2025-12-13T23:06:49.954380406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:49.955528 containerd[1603]: time="2025-12-13T23:06:49.955431005Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 13 23:06:49.956332 containerd[1603]: time="2025-12-13T23:06:49.956274687Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:49.959495 containerd[1603]: time="2025-12-13T23:06:49.959442012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:49.960675 containerd[1603]: time="2025-12-13T23:06:49.960339490Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.259691066s" Dec 13 23:06:49.960675 containerd[1603]: time="2025-12-13T23:06:49.960374252Z" level=info 
msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 13 23:06:49.960826 containerd[1603]: time="2025-12-13T23:06:49.960727960Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 23:06:50.478841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530789179.mount: Deactivated successfully. Dec 13 23:06:50.484414 containerd[1603]: time="2025-12-13T23:06:50.484011841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 23:06:50.484718 containerd[1603]: time="2025-12-13T23:06:50.484665927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 23:06:50.485587 containerd[1603]: time="2025-12-13T23:06:50.485550893Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 23:06:50.488125 containerd[1603]: time="2025-12-13T23:06:50.488087147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 23:06:50.488685 containerd[1603]: time="2025-12-13T23:06:50.488646359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 527.885552ms" Dec 13 23:06:50.488685 containerd[1603]: 
time="2025-12-13T23:06:50.488680643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 23:06:50.489285 containerd[1603]: time="2025-12-13T23:06:50.489263019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 13 23:06:51.160393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994454323.mount: Deactivated successfully. Dec 13 23:06:52.751522 containerd[1603]: time="2025-12-13T23:06:52.751379909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:52.752844 containerd[1603]: time="2025-12-13T23:06:52.752504475Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Dec 13 23:06:52.753594 containerd[1603]: time="2025-12-13T23:06:52.753561673Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:52.756998 containerd[1603]: time="2025-12-13T23:06:52.756955923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:06:52.758277 containerd[1603]: time="2025-12-13T23:06:52.758239515Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.268944041s" Dec 13 23:06:52.758420 containerd[1603]: time="2025-12-13T23:06:52.758386515Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference 
\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 13 23:06:58.097543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:06:58.097704 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.4M memory peak. Dec 13 23:06:58.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:58.099825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:06:58.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:58.103522 kernel: audit: type=1130 audit(1765667218.096:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:58.103594 kernel: audit: type=1131 audit(1765667218.096:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:58.124424 systemd[1]: Reload requested from client PID 2307 ('systemctl') (unit session-8.scope)... Dec 13 23:06:58.124449 systemd[1]: Reloading... Dec 13 23:06:58.201170 zram_generator::config[2353]: No configuration found. Dec 13 23:06:58.444047 systemd[1]: Reloading finished in 319 ms. 
Dec 13 23:06:58.473000 audit: BPF prog-id=67 op=LOAD Dec 13 23:06:58.475135 kernel: audit: type=1334 audit(1765667218.473:296): prog-id=67 op=LOAD Dec 13 23:06:58.477670 kernel: audit: type=1334 audit(1765667218.474:297): prog-id=56 op=UNLOAD Dec 13 23:06:58.477735 kernel: audit: type=1334 audit(1765667218.474:298): prog-id=68 op=LOAD Dec 13 23:06:58.477754 kernel: audit: type=1334 audit(1765667218.474:299): prog-id=49 op=UNLOAD Dec 13 23:06:58.477784 kernel: audit: type=1334 audit(1765667218.474:300): prog-id=69 op=LOAD Dec 13 23:06:58.477812 kernel: audit: type=1334 audit(1765667218.474:301): prog-id=70 op=LOAD Dec 13 23:06:58.477828 kernel: audit: type=1334 audit(1765667218.474:302): prog-id=50 op=UNLOAD Dec 13 23:06:58.477846 kernel: audit: type=1334 audit(1765667218.474:303): prog-id=51 op=UNLOAD Dec 13 23:06:58.474000 audit: BPF prog-id=56 op=UNLOAD Dec 13 23:06:58.474000 audit: BPF prog-id=68 op=LOAD Dec 13 23:06:58.474000 audit: BPF prog-id=49 op=UNLOAD Dec 13 23:06:58.474000 audit: BPF prog-id=69 op=LOAD Dec 13 23:06:58.474000 audit: BPF prog-id=70 op=LOAD Dec 13 23:06:58.474000 audit: BPF prog-id=50 op=UNLOAD Dec 13 23:06:58.474000 audit: BPF prog-id=51 op=UNLOAD Dec 13 23:06:58.476000 audit: BPF prog-id=71 op=LOAD Dec 13 23:06:58.476000 audit: BPF prog-id=52 op=UNLOAD Dec 13 23:06:58.477000 audit: BPF prog-id=72 op=LOAD Dec 13 23:06:58.477000 audit: BPF prog-id=60 op=UNLOAD Dec 13 23:06:58.479000 audit: BPF prog-id=73 op=LOAD Dec 13 23:06:58.480000 audit: BPF prog-id=74 op=LOAD Dec 13 23:06:58.480000 audit: BPF prog-id=61 op=UNLOAD Dec 13 23:06:58.480000 audit: BPF prog-id=62 op=UNLOAD Dec 13 23:06:58.480000 audit: BPF prog-id=75 op=LOAD Dec 13 23:06:58.480000 audit: BPF prog-id=53 op=UNLOAD Dec 13 23:06:58.480000 audit: BPF prog-id=76 op=LOAD Dec 13 23:06:58.480000 audit: BPF prog-id=77 op=LOAD Dec 13 23:06:58.480000 audit: BPF prog-id=54 op=UNLOAD Dec 13 23:06:58.480000 audit: BPF prog-id=55 op=UNLOAD Dec 13 23:06:58.495000 audit: BPF prog-id=78 
op=LOAD Dec 13 23:06:58.495000 audit: BPF prog-id=64 op=UNLOAD Dec 13 23:06:58.495000 audit: BPF prog-id=79 op=LOAD Dec 13 23:06:58.495000 audit: BPF prog-id=80 op=LOAD Dec 13 23:06:58.495000 audit: BPF prog-id=65 op=UNLOAD Dec 13 23:06:58.495000 audit: BPF prog-id=66 op=UNLOAD Dec 13 23:06:58.495000 audit: BPF prog-id=81 op=LOAD Dec 13 23:06:58.495000 audit: BPF prog-id=63 op=UNLOAD Dec 13 23:06:58.496000 audit: BPF prog-id=82 op=LOAD Dec 13 23:06:58.496000 audit: BPF prog-id=83 op=LOAD Dec 13 23:06:58.496000 audit: BPF prog-id=47 op=UNLOAD Dec 13 23:06:58.496000 audit: BPF prog-id=48 op=UNLOAD Dec 13 23:06:58.496000 audit: BPF prog-id=84 op=LOAD Dec 13 23:06:58.496000 audit: BPF prog-id=57 op=UNLOAD Dec 13 23:06:58.496000 audit: BPF prog-id=85 op=LOAD Dec 13 23:06:58.496000 audit: BPF prog-id=86 op=LOAD Dec 13 23:06:58.496000 audit: BPF prog-id=58 op=UNLOAD Dec 13 23:06:58.496000 audit: BPF prog-id=59 op=UNLOAD Dec 13 23:06:58.514643 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 23:06:58.514722 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 23:06:58.515133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:06:58.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 23:06:58.515191 systemd[1]: kubelet.service: Consumed 98ms CPU time, 95.1M memory peak. Dec 13 23:06:58.516888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:06:58.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:06:58.637363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 23:06:58.642276 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 23:06:58.676093 kubelet[2398]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 23:06:58.676093 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 23:06:58.676093 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 23:06:58.676443 kubelet[2398]: I1213 23:06:58.676201 2398 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 23:06:59.162393 kubelet[2398]: I1213 23:06:59.162326 2398 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 13 23:06:59.162393 kubelet[2398]: I1213 23:06:59.162358 2398 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 23:06:59.162633 kubelet[2398]: I1213 23:06:59.162596 2398 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 23:06:59.183302 kubelet[2398]: E1213 23:06:59.183259 2398 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 23:06:59.183880 kubelet[2398]: I1213 23:06:59.183845 2398 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 23:06:59.192164 kubelet[2398]: I1213 23:06:59.192142 2398 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 23:06:59.194823 kubelet[2398]: I1213 23:06:59.194783 2398 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 23:06:59.195830 kubelet[2398]: I1213 23:06:59.195782 2398 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 23:06:59.195990 kubelet[2398]: I1213 23:06:59.195825 2398 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMana
gerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 23:06:59.196100 kubelet[2398]: I1213 23:06:59.196045 2398 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 23:06:59.196100 kubelet[2398]: I1213 23:06:59.196054 2398 container_manager_linux.go:303] "Creating device plugin manager" Dec 13 23:06:59.196774 kubelet[2398]: I1213 23:06:59.196738 2398 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:06:59.199192 kubelet[2398]: I1213 23:06:59.199173 2398 kubelet.go:480] "Attempting to sync node with API server" Dec 13 23:06:59.199225 kubelet[2398]: I1213 23:06:59.199203 2398 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 23:06:59.199251 kubelet[2398]: I1213 23:06:59.199230 2398 kubelet.go:386] "Adding apiserver pod source" Dec 13 23:06:59.199251 kubelet[2398]: I1213 23:06:59.199243 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 23:06:59.200861 kubelet[2398]: I1213 23:06:59.200343 2398 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 23:06:59.200861 kubelet[2398]: E1213 23:06:59.200750 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 23:06:59.201137 kubelet[2398]: I1213 23:06:59.201115 2398 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 23:06:59.201247 kubelet[2398]: W1213 
23:06:59.201232 2398 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 23:06:59.201356 kubelet[2398]: E1213 23:06:59.201309 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 23:06:59.203592 kubelet[2398]: I1213 23:06:59.203570 2398 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 23:06:59.203661 kubelet[2398]: I1213 23:06:59.203611 2398 server.go:1289] "Started kubelet" Dec 13 23:06:59.203682 kubelet[2398]: I1213 23:06:59.203660 2398 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 23:06:59.204627 kubelet[2398]: I1213 23:06:59.204554 2398 server.go:317] "Adding debug handlers to kubelet server" Dec 13 23:06:59.204786 kubelet[2398]: I1213 23:06:59.204711 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 23:06:59.206783 kubelet[2398]: I1213 23:06:59.206760 2398 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 23:06:59.207991 kubelet[2398]: E1213 23:06:59.207038 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880e8fb202e749b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 23:06:59.203585179 
+0000 UTC m=+0.557828448,LastTimestamp:2025-12-13 23:06:59.203585179 +0000 UTC m=+0.557828448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 23:06:59.208762 kubelet[2398]: I1213 23:06:59.208730 2398 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 23:06:59.209091 kubelet[2398]: E1213 23:06:59.209067 2398 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:06:59.209193 kubelet[2398]: I1213 23:06:59.209131 2398 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 23:06:59.209429 kubelet[2398]: I1213 23:06:59.209413 2398 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 23:06:59.209810 kubelet[2398]: I1213 23:06:59.209788 2398 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 23:06:59.209949 kubelet[2398]: I1213 23:06:59.209937 2398 reconciler.go:26] "Reconciler: start to sync state" Dec 13 23:06:59.209998 kubelet[2398]: I1213 23:06:59.209946 2398 factory.go:223] Registration of the systemd container factory successfully Dec 13 23:06:59.210188 kubelet[2398]: I1213 23:06:59.210156 2398 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 23:06:59.210929 kubelet[2398]: E1213 23:06:59.210881 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 23:06:59.211356 kubelet[2398]: I1213 23:06:59.211335 2398 
factory.go:223] Registration of the containerd container factory successfully Dec 13 23:06:59.211483 kubelet[2398]: E1213 23:06:59.211444 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Dec 13 23:06:59.211000 audit[2415]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.211000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc5485b70 a2=0 a3=0 items=0 ppid=2398 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 23:06:59.212000 audit[2416]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.212000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe905bde0 a2=0 a3=0 items=0 ppid=2398 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 23:06:59.214305 kubelet[2398]: E1213 23:06:59.214279 2398 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 23:06:59.215000 audit[2421]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.215000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc59ab430 a2=0 a3=0 items=0 ppid=2398 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 23:06:59.217000 audit[2423]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.217000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc1504b70 a2=0 a3=0 items=0 ppid=2398 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 23:06:59.223367 kubelet[2398]: I1213 23:06:59.223324 2398 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 23:06:59.223367 kubelet[2398]: I1213 23:06:59.223348 2398 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 23:06:59.223367 kubelet[2398]: I1213 23:06:59.223365 2398 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:06:59.224000 audit[2428]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 
23:06:59.224000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffff856050 a2=0 a3=0 items=0 ppid=2398 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 23:06:59.225000 audit[2430]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:59.225000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe348e3b0 a2=0 a3=0 items=0 ppid=2398 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 23:06:59.225000 audit[2431]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.225000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7118e30 a2=0 a3=0 items=0 ppid=2398 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 
23:06:59.226000 audit[2432]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.226000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0c00540 a2=0 a3=0 items=0 ppid=2398 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 23:06:59.226000 audit[2433]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:59.226000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe119dc0 a2=0 a3=0 items=0 ppid=2398 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 23:06:59.227000 audit[2434]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:06:59.227000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb83d7f0 a2=0 a3=0 items=0 ppid=2398 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.227000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 23:06:59.227000 audit[2435]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:59.227000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7eb5550 a2=0 a3=0 items=0 ppid=2398 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 23:06:59.228000 audit[2436]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:06:59.228000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc01ecac0 a2=0 a3=0 items=0 ppid=2398 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.225871 2398 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.227172 2398 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.227190 2398 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.227207 2398 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.227216 2398 kubelet.go:2436] "Starting kubelet main sync loop" Dec 13 23:06:59.305789 kubelet[2398]: E1213 23:06:59.227252 2398 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.305198 2398 policy_none.go:49] "None policy: Start" Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.305221 2398 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 23:06:59.305789 kubelet[2398]: I1213 23:06:59.305233 2398 state_mem.go:35] "Initializing new in-memory state store" Dec 13 23:06:59.305789 kubelet[2398]: E1213 23:06:59.304987 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 23:06:59.309996 kubelet[2398]: E1213 23:06:59.309956 2398 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:06:59.310547 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 23:06:59.319809 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 23:06:59.322797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 23:06:59.327722 kubelet[2398]: E1213 23:06:59.327684 2398 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 23:06:59.331027 kubelet[2398]: E1213 23:06:59.330821 2398 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 23:06:59.331271 kubelet[2398]: I1213 23:06:59.331254 2398 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 23:06:59.331308 kubelet[2398]: I1213 23:06:59.331274 2398 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 23:06:59.332160 kubelet[2398]: I1213 23:06:59.331575 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 23:06:59.332521 kubelet[2398]: E1213 23:06:59.332498 2398 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 23:06:59.332781 kubelet[2398]: E1213 23:06:59.332758 2398 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 23:06:59.412144 kubelet[2398]: E1213 23:06:59.412075 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Dec 13 23:06:59.433344 kubelet[2398]: I1213 23:06:59.433259 2398 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:06:59.433829 kubelet[2398]: E1213 23:06:59.433796 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 13 23:06:59.537772 systemd[1]: Created slice 
kubepods-burstable-podd4d7e2c0c9e1c9e26bebce3bd3b8c301.slice - libcontainer container kubepods-burstable-podd4d7e2c0c9e1c9e26bebce3bd3b8c301.slice. Dec 13 23:06:59.561667 kubelet[2398]: E1213 23:06:59.561613 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:06:59.564836 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 13 23:06:59.575298 kubelet[2398]: E1213 23:06:59.575258 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:06:59.578057 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 13 23:06:59.579999 kubelet[2398]: E1213 23:06:59.579854 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:06:59.611508 kubelet[2398]: I1213 23:06:59.611454 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:06:59.611583 kubelet[2398]: I1213 23:06:59.611526 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " 
pod="kube-system/kube-scheduler-localhost" Dec 13 23:06:59.611583 kubelet[2398]: I1213 23:06:59.611549 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4d7e2c0c9e1c9e26bebce3bd3b8c301-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d7e2c0c9e1c9e26bebce3bd3b8c301\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:06:59.611583 kubelet[2398]: I1213 23:06:59.611569 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4d7e2c0c9e1c9e26bebce3bd3b8c301-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d7e2c0c9e1c9e26bebce3bd3b8c301\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:06:59.611583 kubelet[2398]: I1213 23:06:59.611582 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:06:59.611693 kubelet[2398]: I1213 23:06:59.611597 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:06:59.611693 kubelet[2398]: I1213 23:06:59.611614 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4d7e2c0c9e1c9e26bebce3bd3b8c301-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d4d7e2c0c9e1c9e26bebce3bd3b8c301\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 23:06:59.611693 kubelet[2398]: I1213 23:06:59.611637 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:06:59.611693 kubelet[2398]: I1213 23:06:59.611674 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:06:59.635647 kubelet[2398]: I1213 23:06:59.635589 2398 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:06:59.636004 kubelet[2398]: E1213 23:06:59.635965 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 13 23:06:59.813669 kubelet[2398]: E1213 23:06:59.813618 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Dec 13 23:06:59.862477 kubelet[2398]: E1213 23:06:59.862441 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:06:59.863172 containerd[1603]: time="2025-12-13T23:06:59.863092366Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d4d7e2c0c9e1c9e26bebce3bd3b8c301,Namespace:kube-system,Attempt:0,}" Dec 13 23:06:59.876388 kubelet[2398]: E1213 23:06:59.876328 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:06:59.876937 containerd[1603]: time="2025-12-13T23:06:59.876867650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 13 23:06:59.880192 kubelet[2398]: E1213 23:06:59.880170 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:06:59.880845 containerd[1603]: time="2025-12-13T23:06:59.880779686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 13 23:06:59.884029 containerd[1603]: time="2025-12-13T23:06:59.883998074Z" level=info msg="connecting to shim d65ddb5ebb139e1453f1d440aafacb82713e2b241208d80ff8ee6d4c982aa003" address="unix:///run/containerd/s/427e8663125eb657c0c5bb4fd3570f2975af4958d9748bf23828045014f05777" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:06:59.905607 containerd[1603]: time="2025-12-13T23:06:59.905555179Z" level=info msg="connecting to shim 15135ee4ee91c7de8463487f098dce13c238fc201048d1b28130855b6c38947d" address="unix:///run/containerd/s/0c8703da9c2f7a441828f3b4715ea36dfc90214a42837a6202cc2d3fe99dfd90" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:06:59.909719 containerd[1603]: time="2025-12-13T23:06:59.909659949Z" level=info msg="connecting to shim dedf17ed58d2cb595f48c8db1ced487e9ce9b6879c12098d57c91211c6456242" address="unix:///run/containerd/s/deb397a48a52eb008f932bd47a5718c66ce080dddb5f748d86a810f0e72a20f4" 
namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:06:59.915357 systemd[1]: Started cri-containerd-d65ddb5ebb139e1453f1d440aafacb82713e2b241208d80ff8ee6d4c982aa003.scope - libcontainer container d65ddb5ebb139e1453f1d440aafacb82713e2b241208d80ff8ee6d4c982aa003. Dec 13 23:06:59.933310 systemd[1]: Started cri-containerd-15135ee4ee91c7de8463487f098dce13c238fc201048d1b28130855b6c38947d.scope - libcontainer container 15135ee4ee91c7de8463487f098dce13c238fc201048d1b28130855b6c38947d. Dec 13 23:06:59.934000 audit: BPF prog-id=87 op=LOAD Dec 13 23:06:59.936000 audit: BPF prog-id=88 op=LOAD Dec 13 23:06:59.936000 audit[2455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.937000 audit: BPF prog-id=88 op=UNLOAD Dec 13 23:06:59.937000 audit[2455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.937000 audit: BPF prog-id=89 op=LOAD Dec 13 23:06:59.937000 audit[2455]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.937000 audit: BPF prog-id=90 op=LOAD Dec 13 23:06:59.937000 audit[2455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.937000 audit: BPF prog-id=90 op=UNLOAD Dec 13 23:06:59.937000 audit[2455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.937000 audit: BPF prog-id=89 op=UNLOAD 
Dec 13 23:06:59.937000 audit[2455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.937000 audit: BPF prog-id=91 op=LOAD Dec 13 23:06:59.937000 audit[2455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2445 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436356464623565626231333965313435336631643434306161666163 Dec 13 23:06:59.945338 systemd[1]: Started cri-containerd-dedf17ed58d2cb595f48c8db1ced487e9ce9b6879c12098d57c91211c6456242.scope - libcontainer container dedf17ed58d2cb595f48c8db1ced487e9ce9b6879c12098d57c91211c6456242. 
Dec 13 23:06:59.948000 audit: BPF prog-id=92 op=LOAD Dec 13 23:06:59.948000 audit: BPF prog-id=93 op=LOAD Dec 13 23:06:59.948000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.949000 audit: BPF prog-id=93 op=UNLOAD Dec 13 23:06:59.949000 audit[2500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.949000 audit: BPF prog-id=94 op=LOAD Dec 13 23:06:59.949000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.949000 audit: BPF prog-id=95 op=LOAD Dec 13 23:06:59.949000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.949000 audit: BPF prog-id=95 op=UNLOAD Dec 13 23:06:59.949000 audit[2500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.949000 audit: BPF prog-id=94 op=UNLOAD Dec 13 23:06:59.949000 audit[2500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
23:06:59.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.949000 audit: BPF prog-id=96 op=LOAD Dec 13 23:06:59.949000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2477 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313335656534656539316337646538343633343837663039386463 Dec 13 23:06:59.966000 audit: BPF prog-id=97 op=LOAD Dec 13 23:06:59.967000 audit: BPF prog-id=98 op=LOAD Dec 13 23:06:59.967000 audit[2519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.967000 audit: BPF prog-id=98 op=UNLOAD Dec 13 23:06:59.967000 audit[2519]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.967000 audit: BPF prog-id=99 op=LOAD Dec 13 23:06:59.967000 audit[2519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.968000 audit: BPF prog-id=100 op=LOAD Dec 13 23:06:59.968000 audit[2519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.968000 audit: BPF prog-id=100 op=UNLOAD Dec 13 23:06:59.968000 audit[2519]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.968000 audit: BPF prog-id=99 op=UNLOAD Dec 13 23:06:59.968000 audit[2519]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.968000 audit: BPF prog-id=101 op=LOAD Dec 13 23:06:59.968000 audit[2519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2489 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:06:59.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465646631376564353864326362353935663438633864623163656434 Dec 13 23:06:59.969176 containerd[1603]: time="2025-12-13T23:06:59.969138481Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d4d7e2c0c9e1c9e26bebce3bd3b8c301,Namespace:kube-system,Attempt:0,} returns sandbox id \"d65ddb5ebb139e1453f1d440aafacb82713e2b241208d80ff8ee6d4c982aa003\"" Dec 13 23:06:59.970676 kubelet[2398]: E1213 23:06:59.970645 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:06:59.977129 containerd[1603]: time="2025-12-13T23:06:59.976461638Z" level=info msg="CreateContainer within sandbox \"d65ddb5ebb139e1453f1d440aafacb82713e2b241208d80ff8ee6d4c982aa003\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 23:06:59.985570 containerd[1603]: time="2025-12-13T23:06:59.985529058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"15135ee4ee91c7de8463487f098dce13c238fc201048d1b28130855b6c38947d\"" Dec 13 23:06:59.987273 kubelet[2398]: E1213 23:06:59.987212 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:06:59.987796 containerd[1603]: time="2025-12-13T23:06:59.987768945Z" level=info msg="Container 87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:06:59.991887 containerd[1603]: time="2025-12-13T23:06:59.991167960Z" level=info msg="CreateContainer within sandbox \"15135ee4ee91c7de8463487f098dce13c238fc201048d1b28130855b6c38947d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 23:06:59.996893 containerd[1603]: time="2025-12-13T23:06:59.996843421Z" level=info msg="CreateContainer within sandbox \"d65ddb5ebb139e1453f1d440aafacb82713e2b241208d80ff8ee6d4c982aa003\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081\"" Dec 13 23:06:59.998007 containerd[1603]: time="2025-12-13T23:06:59.997979980Z" level=info msg="StartContainer for \"87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081\"" Dec 13 23:07:00.000282 containerd[1603]: time="2025-12-13T23:07:00.000164412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"dedf17ed58d2cb595f48c8db1ced487e9ce9b6879c12098d57c91211c6456242\"" Dec 13 23:07:00.000643 containerd[1603]: time="2025-12-13T23:07:00.000617343Z" level=info msg="Container 3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:00.000789 kubelet[2398]: E1213 23:07:00.000769 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:00.001876 containerd[1603]: time="2025-12-13T23:07:00.001847173Z" level=info msg="connecting to shim 87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081" address="unix:///run/containerd/s/427e8663125eb657c0c5bb4fd3570f2975af4958d9748bf23828045014f05777" protocol=ttrpc version=3 Dec 13 23:07:00.004272 containerd[1603]: time="2025-12-13T23:07:00.004219590Z" level=info msg="CreateContainer within sandbox \"dedf17ed58d2cb595f48c8db1ced487e9ce9b6879c12098d57c91211c6456242\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 23:07:00.008281 containerd[1603]: time="2025-12-13T23:07:00.008168688Z" level=info msg="CreateContainer within sandbox \"15135ee4ee91c7de8463487f098dce13c238fc201048d1b28130855b6c38947d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439\"" Dec 13 23:07:00.008861 containerd[1603]: time="2025-12-13T23:07:00.008821714Z" level=info msg="StartContainer for \"3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439\"" Dec 13 23:07:00.012131 containerd[1603]: time="2025-12-13T23:07:00.011127446Z" level=info msg="connecting to shim 3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439" address="unix:///run/containerd/s/0c8703da9c2f7a441828f3b4715ea36dfc90214a42837a6202cc2d3fe99dfd90" protocol=ttrpc version=3 Dec 13 23:07:00.012455 containerd[1603]: time="2025-12-13T23:07:00.012428129Z" level=info msg="Container 0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:00.020487 containerd[1603]: time="2025-12-13T23:07:00.020451761Z" level=info msg="CreateContainer within sandbox \"dedf17ed58d2cb595f48c8db1ced487e9ce9b6879c12098d57c91211c6456242\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6\"" Dec 13 23:07:00.020920 containerd[1603]: time="2025-12-13T23:07:00.020897157Z" level=info msg="StartContainer for \"0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6\"" Dec 13 23:07:00.021360 systemd[1]: Started cri-containerd-87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081.scope - libcontainer container 87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081. 
Dec 13 23:07:00.021905 containerd[1603]: time="2025-12-13T23:07:00.021875034Z" level=info msg="connecting to shim 0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6" address="unix:///run/containerd/s/deb397a48a52eb008f932bd47a5718c66ce080dddb5f748d86a810f0e72a20f4" protocol=ttrpc version=3 Dec 13 23:07:00.037647 kubelet[2398]: I1213 23:07:00.037622 2398 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:07:00.037983 kubelet[2398]: E1213 23:07:00.037959 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 13 23:07:00.038338 systemd[1]: Started cri-containerd-3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439.scope - libcontainer container 3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439. Dec 13 23:07:00.040000 audit: BPF prog-id=102 op=LOAD Dec 13 23:07:00.041000 audit: BPF prog-id=103 op=LOAD Dec 13 23:07:00.041000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.042000 audit: BPF prog-id=103 op=UNLOAD Dec 13 23:07:00.042000 audit[2574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.042000 audit: BPF prog-id=104 op=LOAD Dec 13 23:07:00.042000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.042000 audit: BPF prog-id=105 op=LOAD Dec 13 23:07:00.042000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.042000 audit: BPF prog-id=105 op=UNLOAD Dec 13 23:07:00.042000 audit[2574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.042000 audit: BPF prog-id=104 op=UNLOAD Dec 13 23:07:00.042000 audit[2574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.042000 audit: BPF prog-id=106 op=LOAD Dec 13 23:07:00.042000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2445 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353038353331663866613461363539613439336138386466633761 Dec 13 23:07:00.055374 systemd[1]: Started cri-containerd-0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6.scope - libcontainer container 0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6. 
Dec 13 23:07:00.061000 audit: BPF prog-id=107 op=LOAD Dec 13 23:07:00.061000 audit: BPF prog-id=108 op=LOAD Dec 13 23:07:00.061000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.061000 audit: BPF prog-id=108 op=UNLOAD Dec 13 23:07:00.061000 audit[2586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.062000 audit: BPF prog-id=109 op=LOAD Dec 13 23:07:00.062000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.062000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.062000 audit: BPF prog-id=110 op=LOAD Dec 13 23:07:00.062000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.062000 audit: BPF prog-id=110 op=UNLOAD Dec 13 23:07:00.062000 audit[2586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.062000 audit: BPF prog-id=109 op=UNLOAD Dec 13 23:07:00.062000 audit[2586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:00.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.062000 audit: BPF prog-id=111 op=LOAD Dec 13 23:07:00.062000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2477 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364363161373538356138363337303531363336616466323564613934 Dec 13 23:07:00.069000 audit: BPF prog-id=112 op=LOAD Dec 13 23:07:00.073000 audit: BPF prog-id=113 op=LOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.073000 audit: BPF prog-id=113 op=UNLOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.073000 audit: BPF prog-id=114 op=LOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.073000 audit: BPF prog-id=115 op=LOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.073000 audit: BPF prog-id=115 op=UNLOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.073000 audit: BPF prog-id=114 op=UNLOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.073000 audit: BPF prog-id=116 op=LOAD Dec 13 23:07:00.073000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2489 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:00.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316662636364383564626432316138356165623262393662636139 Dec 13 23:07:00.097178 containerd[1603]: time="2025-12-13T23:07:00.095057101Z" level=info msg="StartContainer for \"87508531f8fa4a659a493a88dfc7a308daab4ae1140d7200a4a0f3f178ff0081\" returns 
successfully" Dec 13 23:07:00.097178 containerd[1603]: time="2025-12-13T23:07:00.095871430Z" level=info msg="StartContainer for \"3d61a7585a8637051636adf25da9494beee8a90c7d94ea82b0a1f1cf5c129439\" returns successfully" Dec 13 23:07:00.113124 containerd[1603]: time="2025-12-13T23:07:00.112282818Z" level=info msg="StartContainer for \"0b1fbccd85dbd21a85aeb2b96bca94a75611de7448fc6913100b264f3d31aad6\" returns successfully" Dec 13 23:07:00.236380 kubelet[2398]: E1213 23:07:00.236347 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:07:00.236509 kubelet[2398]: E1213 23:07:00.236475 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:00.239096 kubelet[2398]: E1213 23:07:00.239074 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:07:00.239616 kubelet[2398]: E1213 23:07:00.239592 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:00.241133 kubelet[2398]: E1213 23:07:00.241095 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:07:00.241244 kubelet[2398]: E1213 23:07:00.241227 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:00.841151 kubelet[2398]: I1213 23:07:00.841125 2398 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:07:01.243529 kubelet[2398]: E1213 23:07:01.243431 2398 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:07:01.243616 kubelet[2398]: E1213 23:07:01.243560 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:01.243815 kubelet[2398]: E1213 23:07:01.243793 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:07:01.244363 kubelet[2398]: E1213 23:07:01.244344 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:01.244979 kubelet[2398]: E1213 23:07:01.244958 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:07:01.245093 kubelet[2398]: E1213 23:07:01.245073 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:01.916005 kubelet[2398]: E1213 23:07:01.915949 2398 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 23:07:01.983384 kubelet[2398]: I1213 23:07:01.983331 2398 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 23:07:02.010875 kubelet[2398]: I1213 23:07:02.010831 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 23:07:02.017674 kubelet[2398]: E1213 23:07:02.017621 2398 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 23:07:02.017674 kubelet[2398]: I1213 23:07:02.017655 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:02.019685 kubelet[2398]: E1213 23:07:02.019649 2398 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:02.019685 kubelet[2398]: I1213 23:07:02.019677 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:07:02.021439 kubelet[2398]: E1213 23:07:02.021408 2398 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 23:07:02.202330 kubelet[2398]: I1213 23:07:02.202228 2398 apiserver.go:52] "Watching apiserver" Dec 13 23:07:02.210333 kubelet[2398]: I1213 23:07:02.210305 2398 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 23:07:04.081968 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-8.scope)... Dec 13 23:07:04.081983 systemd[1]: Reloading... Dec 13 23:07:04.159196 zram_generator::config[2729]: No configuration found. Dec 13 23:07:04.365597 systemd[1]: Reloading finished in 283 ms. Dec 13 23:07:04.395960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:07:04.426778 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 13 23:07:04.426886 kernel: audit: type=1131 audit(1765667224.424:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:04.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:04.425608 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 23:07:04.425966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:07:04.426055 systemd[1]: kubelet.service: Consumed 960ms CPU time, 126.6M memory peak. Dec 13 23:07:04.427845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:07:04.428000 audit: BPF prog-id=117 op=LOAD Dec 13 23:07:04.428000 audit: BPF prog-id=67 op=UNLOAD Dec 13 23:07:04.429000 audit: BPF prog-id=118 op=LOAD Dec 13 23:07:04.431858 kernel: audit: type=1334 audit(1765667224.428:399): prog-id=117 op=LOAD Dec 13 23:07:04.431927 kernel: audit: type=1334 audit(1765667224.428:400): prog-id=67 op=UNLOAD Dec 13 23:07:04.431947 kernel: audit: type=1334 audit(1765667224.429:401): prog-id=118 op=LOAD Dec 13 23:07:04.431967 kernel: audit: type=1334 audit(1765667224.429:402): prog-id=84 op=UNLOAD Dec 13 23:07:04.429000 audit: BPF prog-id=84 op=UNLOAD Dec 13 23:07:04.432695 kernel: audit: type=1334 audit(1765667224.429:403): prog-id=119 op=LOAD Dec 13 23:07:04.432756 kernel: audit: type=1334 audit(1765667224.430:404): prog-id=120 op=LOAD Dec 13 23:07:04.432775 kernel: audit: type=1334 audit(1765667224.431:405): prog-id=85 op=UNLOAD Dec 13 23:07:04.432790 kernel: audit: type=1334 audit(1765667224.431:406): prog-id=86 op=UNLOAD Dec 13 23:07:04.429000 audit: BPF prog-id=119 op=LOAD Dec 13 23:07:04.430000 audit: BPF prog-id=120 op=LOAD Dec 13 23:07:04.431000 audit: BPF prog-id=85 op=UNLOAD Dec 13 23:07:04.431000 audit: BPF prog-id=86 op=UNLOAD Dec 13 23:07:04.433780 kernel: audit: type=1334 audit(1765667224.432:407): prog-id=121 op=LOAD Dec 13 23:07:04.432000 audit: BPF prog-id=121 op=LOAD Dec 13 23:07:04.433000 audit: BPF 
prog-id=122 op=LOAD Dec 13 23:07:04.433000 audit: BPF prog-id=82 op=UNLOAD Dec 13 23:07:04.433000 audit: BPF prog-id=83 op=UNLOAD Dec 13 23:07:04.435000 audit: BPF prog-id=123 op=LOAD Dec 13 23:07:04.435000 audit: BPF prog-id=68 op=UNLOAD Dec 13 23:07:04.435000 audit: BPF prog-id=124 op=LOAD Dec 13 23:07:04.435000 audit: BPF prog-id=125 op=LOAD Dec 13 23:07:04.435000 audit: BPF prog-id=69 op=UNLOAD Dec 13 23:07:04.435000 audit: BPF prog-id=70 op=UNLOAD Dec 13 23:07:04.460000 audit: BPF prog-id=126 op=LOAD Dec 13 23:07:04.460000 audit: BPF prog-id=78 op=UNLOAD Dec 13 23:07:04.460000 audit: BPF prog-id=127 op=LOAD Dec 13 23:07:04.460000 audit: BPF prog-id=128 op=LOAD Dec 13 23:07:04.460000 audit: BPF prog-id=79 op=UNLOAD Dec 13 23:07:04.460000 audit: BPF prog-id=80 op=UNLOAD Dec 13 23:07:04.461000 audit: BPF prog-id=129 op=LOAD Dec 13 23:07:04.461000 audit: BPF prog-id=75 op=UNLOAD Dec 13 23:07:04.461000 audit: BPF prog-id=130 op=LOAD Dec 13 23:07:04.461000 audit: BPF prog-id=131 op=LOAD Dec 13 23:07:04.461000 audit: BPF prog-id=76 op=UNLOAD Dec 13 23:07:04.461000 audit: BPF prog-id=77 op=UNLOAD Dec 13 23:07:04.462000 audit: BPF prog-id=132 op=LOAD Dec 13 23:07:04.462000 audit: BPF prog-id=81 op=UNLOAD Dec 13 23:07:04.462000 audit: BPF prog-id=133 op=LOAD Dec 13 23:07:04.462000 audit: BPF prog-id=71 op=UNLOAD Dec 13 23:07:04.463000 audit: BPF prog-id=134 op=LOAD Dec 13 23:07:04.463000 audit: BPF prog-id=72 op=UNLOAD Dec 13 23:07:04.463000 audit: BPF prog-id=135 op=LOAD Dec 13 23:07:04.463000 audit: BPF prog-id=136 op=LOAD Dec 13 23:07:04.463000 audit: BPF prog-id=73 op=UNLOAD Dec 13 23:07:04.463000 audit: BPF prog-id=74 op=UNLOAD Dec 13 23:07:04.590292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:07:04.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:04.605404 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 23:07:04.650188 kubelet[2771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 23:07:04.650188 kubelet[2771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 23:07:04.650188 kubelet[2771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 23:07:04.650532 kubelet[2771]: I1213 23:07:04.650230 2771 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 23:07:04.663368 kubelet[2771]: I1213 23:07:04.663325 2771 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 13 23:07:04.663368 kubelet[2771]: I1213 23:07:04.663355 2771 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 23:07:04.663912 kubelet[2771]: I1213 23:07:04.663613 2771 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 23:07:04.665420 kubelet[2771]: I1213 23:07:04.665363 2771 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 13 23:07:04.670618 kubelet[2771]: I1213 23:07:04.670542 2771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 23:07:04.680992 kubelet[2771]: I1213 23:07:04.680943 2771 server.go:1446] "Using cgroup driver setting received from the CRI 
runtime" cgroupDriver="systemd" Dec 13 23:07:04.685509 kubelet[2771]: I1213 23:07:04.685460 2771 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 23:07:04.685932 kubelet[2771]: I1213 23:07:04.685886 2771 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 23:07:04.686399 kubelet[2771]: I1213 23:07:04.686015 2771 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"C
groupVersion":2} Dec 13 23:07:04.686399 kubelet[2771]: I1213 23:07:04.686354 2771 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 23:07:04.686399 kubelet[2771]: I1213 23:07:04.686366 2771 container_manager_linux.go:303] "Creating device plugin manager" Dec 13 23:07:04.686603 kubelet[2771]: I1213 23:07:04.686589 2771 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:07:04.686939 kubelet[2771]: I1213 23:07:04.686914 2771 kubelet.go:480] "Attempting to sync node with API server" Dec 13 23:07:04.686995 kubelet[2771]: I1213 23:07:04.686948 2771 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 23:07:04.686995 kubelet[2771]: I1213 23:07:04.686971 2771 kubelet.go:386] "Adding apiserver pod source" Dec 13 23:07:04.686995 kubelet[2771]: I1213 23:07:04.686984 2771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 23:07:04.689373 kubelet[2771]: I1213 23:07:04.689343 2771 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 23:07:04.690341 kubelet[2771]: I1213 23:07:04.690289 2771 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 23:07:04.694663 kubelet[2771]: I1213 23:07:04.694637 2771 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 23:07:04.694728 kubelet[2771]: I1213 23:07:04.694692 2771 server.go:1289] "Started kubelet" Dec 13 23:07:04.695217 kubelet[2771]: I1213 23:07:04.695170 2771 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 23:07:04.695837 kubelet[2771]: I1213 23:07:04.695731 2771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 23:07:04.696279 kubelet[2771]: I1213 23:07:04.696220 2771 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 23:07:04.698899 kubelet[2771]: I1213 23:07:04.698871 2771 server.go:317] "Adding debug handlers to kubelet server" Dec 13 23:07:04.699196 kubelet[2771]: I1213 23:07:04.699176 2771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 23:07:04.703802 kubelet[2771]: I1213 23:07:04.703725 2771 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 23:07:04.705809 kubelet[2771]: E1213 23:07:04.705768 2771 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 23:07:04.705938 kubelet[2771]: E1213 23:07:04.705908 2771 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:07:04.706033 kubelet[2771]: I1213 23:07:04.705928 2771 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 23:07:04.706261 kubelet[2771]: I1213 23:07:04.705937 2771 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 23:07:04.706541 kubelet[2771]: I1213 23:07:04.706496 2771 reconciler.go:26] "Reconciler: start to sync state" Dec 13 23:07:04.711780 kubelet[2771]: I1213 23:07:04.711750 2771 factory.go:223] Registration of the containerd container factory successfully Dec 13 23:07:04.711780 kubelet[2771]: I1213 23:07:04.711775 2771 factory.go:223] Registration of the systemd container factory successfully Dec 13 23:07:04.711884 kubelet[2771]: I1213 23:07:04.711862 2771 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 23:07:04.730431 kubelet[2771]: I1213 23:07:04.730387 2771 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 13 23:07:04.732558 kubelet[2771]: I1213 23:07:04.732137 2771 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 13 23:07:04.732558 kubelet[2771]: I1213 23:07:04.732164 2771 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 13 23:07:04.732558 kubelet[2771]: I1213 23:07:04.732184 2771 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 23:07:04.732558 kubelet[2771]: I1213 23:07:04.732192 2771 kubelet.go:2436] "Starting kubelet main sync loop" Dec 13 23:07:04.732558 kubelet[2771]: E1213 23:07:04.732254 2771 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 23:07:04.767184 kubelet[2771]: I1213 23:07:04.767155 2771 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 23:07:04.767184 kubelet[2771]: I1213 23:07:04.767177 2771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 23:07:04.767409 kubelet[2771]: I1213 23:07:04.767199 2771 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:07:04.767409 kubelet[2771]: I1213 23:07:04.767364 2771 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 23:07:04.767409 kubelet[2771]: I1213 23:07:04.767376 2771 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 23:07:04.767409 kubelet[2771]: I1213 23:07:04.767392 2771 policy_none.go:49] "None policy: Start" Dec 13 23:07:04.767409 kubelet[2771]: I1213 23:07:04.767401 2771 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 23:07:04.767409 kubelet[2771]: I1213 23:07:04.767410 2771 state_mem.go:35] "Initializing new in-memory state store" Dec 13 23:07:04.767529 kubelet[2771]: I1213 23:07:04.767491 2771 state_mem.go:75] "Updated machine memory state" Dec 13 23:07:04.771794 kubelet[2771]: E1213 23:07:04.771695 2771 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 23:07:04.771919 kubelet[2771]: I1213 23:07:04.771863 2771 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 23:07:04.771919 kubelet[2771]: I1213 23:07:04.771889 2771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 23:07:04.772460 kubelet[2771]: I1213 23:07:04.772430 2771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 23:07:04.774442 kubelet[2771]: E1213 23:07:04.774152 2771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 23:07:04.833350 kubelet[2771]: I1213 23:07:04.833267 2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:07:04.833350 kubelet[2771]: I1213 23:07:04.833305 2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:04.833550 kubelet[2771]: I1213 23:07:04.833398 2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 23:07:04.875570 kubelet[2771]: I1213 23:07:04.875532 2771 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:07:04.881776 kubelet[2771]: I1213 23:07:04.881682 2771 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 23:07:04.882285 kubelet[2771]: I1213 23:07:04.882010 2771 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 23:07:04.908293 kubelet[2771]: I1213 23:07:04.908190 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4d7e2c0c9e1c9e26bebce3bd3b8c301-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d7e2c0c9e1c9e26bebce3bd3b8c301\") 
" pod="kube-system/kube-apiserver-localhost" Dec 13 23:07:04.908293 kubelet[2771]: I1213 23:07:04.908225 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:04.908293 kubelet[2771]: I1213 23:07:04.908247 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:04.908293 kubelet[2771]: I1213 23:07:04.908276 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:04.908293 kubelet[2771]: I1213 23:07:04.908293 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4d7e2c0c9e1c9e26bebce3bd3b8c301-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d7e2c0c9e1c9e26bebce3bd3b8c301\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:07:04.908481 kubelet[2771]: I1213 23:07:04.908308 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4d7e2c0c9e1c9e26bebce3bd3b8c301-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"d4d7e2c0c9e1c9e26bebce3bd3b8c301\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:07:04.908481 kubelet[2771]: I1213 23:07:04.908323 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:04.908481 kubelet[2771]: I1213 23:07:04.908347 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:07:04.908481 kubelet[2771]: I1213 23:07:04.908363 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 13 23:07:05.145436 kubelet[2771]: E1213 23:07:05.145163 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:05.145436 kubelet[2771]: E1213 23:07:05.145243 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:05.145436 kubelet[2771]: E1213 23:07:05.145377 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:05.693747 
kubelet[2771]: I1213 23:07:05.693709 2771 apiserver.go:52] "Watching apiserver" Dec 13 23:07:05.706594 kubelet[2771]: I1213 23:07:05.706558 2771 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 23:07:05.754481 kubelet[2771]: I1213 23:07:05.754433 2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:07:05.754596 kubelet[2771]: E1213 23:07:05.754551 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:05.754596 kubelet[2771]: E1213 23:07:05.754565 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:05.763429 kubelet[2771]: E1213 23:07:05.763374 2771 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 23:07:05.763555 kubelet[2771]: E1213 23:07:05.763543 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:05.776158 kubelet[2771]: I1213 23:07:05.775711 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.775693901 podStartE2EDuration="1.775693901s" podCreationTimestamp="2025-12-13 23:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:07:05.775466762 +0000 UTC m=+1.158675004" watchObservedRunningTime="2025-12-13 23:07:05.775693901 +0000 UTC m=+1.158902143" Dec 13 23:07:05.794024 kubelet[2771]: I1213 23:07:05.793860 2771 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7938425630000001 podStartE2EDuration="1.793842563s" podCreationTimestamp="2025-12-13 23:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:07:05.784857378 +0000 UTC m=+1.168065620" watchObservedRunningTime="2025-12-13 23:07:05.793842563 +0000 UTC m=+1.177050805" Dec 13 23:07:05.804375 kubelet[2771]: I1213 23:07:05.804259 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.804094129 podStartE2EDuration="1.804094129s" podCreationTimestamp="2025-12-13 23:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:07:05.793908466 +0000 UTC m=+1.177116708" watchObservedRunningTime="2025-12-13 23:07:05.804094129 +0000 UTC m=+1.187302371" Dec 13 23:07:06.755839 kubelet[2771]: E1213 23:07:06.755788 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:06.756835 kubelet[2771]: E1213 23:07:06.755924 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:07.757740 kubelet[2771]: E1213 23:07:07.757439 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:07.757740 kubelet[2771]: E1213 23:07:07.757522 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:09.400564 
kubelet[2771]: I1213 23:07:09.400519 2771 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 23:07:09.401052 containerd[1603]: time="2025-12-13T23:07:09.401013558Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 23:07:09.401944 kubelet[2771]: I1213 23:07:09.401240 2771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 23:07:09.908902 kubelet[2771]: E1213 23:07:09.908817 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:10.533977 systemd[1]: Created slice kubepods-besteffort-podd5a4b0ff_5e29_4714_9f87_320366db4b64.slice - libcontainer container kubepods-besteffort-podd5a4b0ff_5e29_4714_9f87_320366db4b64.slice. Dec 13 23:07:10.644557 kubelet[2771]: I1213 23:07:10.644086 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5a4b0ff-5e29-4714-9f87-320366db4b64-xtables-lock\") pod \"kube-proxy-vs5b9\" (UID: \"d5a4b0ff-5e29-4714-9f87-320366db4b64\") " pod="kube-system/kube-proxy-vs5b9" Dec 13 23:07:10.644557 kubelet[2771]: I1213 23:07:10.644148 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9t2\" (UniqueName: \"kubernetes.io/projected/d5a4b0ff-5e29-4714-9f87-320366db4b64-kube-api-access-5l9t2\") pod \"kube-proxy-vs5b9\" (UID: \"d5a4b0ff-5e29-4714-9f87-320366db4b64\") " pod="kube-system/kube-proxy-vs5b9" Dec 13 23:07:10.644557 kubelet[2771]: I1213 23:07:10.644207 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5a4b0ff-5e29-4714-9f87-320366db4b64-kube-proxy\") pod \"kube-proxy-vs5b9\" 
(UID: \"d5a4b0ff-5e29-4714-9f87-320366db4b64\") " pod="kube-system/kube-proxy-vs5b9" Dec 13 23:07:10.644557 kubelet[2771]: I1213 23:07:10.644224 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5a4b0ff-5e29-4714-9f87-320366db4b64-lib-modules\") pod \"kube-proxy-vs5b9\" (UID: \"d5a4b0ff-5e29-4714-9f87-320366db4b64\") " pod="kube-system/kube-proxy-vs5b9" Dec 13 23:07:10.648814 systemd[1]: Created slice kubepods-besteffort-pod2c10598f_24c4_437b_a735_02088f603f81.slice - libcontainer container kubepods-besteffort-pod2c10598f_24c4_437b_a735_02088f603f81.slice. Dec 13 23:07:10.744605 kubelet[2771]: I1213 23:07:10.744555 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c10598f-24c4-437b-a735-02088f603f81-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2w45t\" (UID: \"2c10598f-24c4-437b-a735-02088f603f81\") " pod="tigera-operator/tigera-operator-7dcd859c48-2w45t" Dec 13 23:07:10.744745 kubelet[2771]: I1213 23:07:10.744692 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qskgt\" (UniqueName: \"kubernetes.io/projected/2c10598f-24c4-437b-a735-02088f603f81-kube-api-access-qskgt\") pod \"tigera-operator-7dcd859c48-2w45t\" (UID: \"2c10598f-24c4-437b-a735-02088f603f81\") " pod="tigera-operator/tigera-operator-7dcd859c48-2w45t" Dec 13 23:07:10.846433 kubelet[2771]: E1213 23:07:10.846250 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:10.847114 containerd[1603]: time="2025-12-13T23:07:10.846784305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vs5b9,Uid:d5a4b0ff-5e29-4714-9f87-320366db4b64,Namespace:kube-system,Attempt:0,}" Dec 13 
23:07:10.866649 containerd[1603]: time="2025-12-13T23:07:10.866603197Z" level=info msg="connecting to shim cdd225421861a10a9aa294148925a48a797a397b5fd34a348f4097bcfea7b6a0" address="unix:///run/containerd/s/57dbbff3ba12326c3c63ee3c8cb58d138709ea2ccfea48b1a15f832d477208fc" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:10.895279 systemd[1]: Started cri-containerd-cdd225421861a10a9aa294148925a48a797a397b5fd34a348f4097bcfea7b6a0.scope - libcontainer container cdd225421861a10a9aa294148925a48a797a397b5fd34a348f4097bcfea7b6a0. Dec 13 23:07:10.903000 audit: BPF prog-id=137 op=LOAD Dec 13 23:07:10.905168 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 23:07:10.905222 kernel: audit: type=1334 audit(1765667230.903:440): prog-id=137 op=LOAD Dec 13 23:07:10.904000 audit: BPF prog-id=138 op=LOAD Dec 13 23:07:10.906824 kernel: audit: type=1334 audit(1765667230.904:441): prog-id=138 op=LOAD Dec 13 23:07:10.906877 kernel: audit: type=1300 audit(1765667230.904:441): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.904000 audit[2845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.913535 kernel: audit: type=1327 audit(1765667230.904:441): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.913561 kernel: audit: type=1334 audit(1765667230.904:442): prog-id=138 op=UNLOAD Dec 13 23:07:10.904000 audit: BPF prog-id=138 op=UNLOAD Dec 13 23:07:10.904000 audit[2845]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.917578 kernel: audit: type=1300 audit(1765667230.904:442): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.917731 kernel: audit: type=1327 audit(1765667230.904:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.905000 audit: BPF prog-id=139 op=LOAD Dec 13 23:07:10.921714 kernel: audit: type=1334 audit(1765667230.905:443): prog-id=139 op=LOAD Dec 13 23:07:10.921760 kernel: audit: type=1300 audit(1765667230.905:443): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001303e8 a2=98 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.905000 audit[2845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.928435 kernel: audit: type=1327 audit(1765667230.905:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.909000 audit: BPF prog-id=140 op=LOAD Dec 13 23:07:10.909000 audit[2845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.913000 audit: BPF prog-id=140 op=UNLOAD Dec 13 23:07:10.913000 audit[2845]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.913000 audit: BPF prog-id=139 op=UNLOAD Dec 13 23:07:10.913000 audit[2845]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.913000 audit: BPF prog-id=141 op=LOAD Dec 13 23:07:10.913000 audit[2845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2835 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:10.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364643232353432313836316131306139616132393431343839323561 Dec 13 23:07:10.939498 containerd[1603]: time="2025-12-13T23:07:10.939463486Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vs5b9,Uid:d5a4b0ff-5e29-4714-9f87-320366db4b64,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdd225421861a10a9aa294148925a48a797a397b5fd34a348f4097bcfea7b6a0\"" Dec 13 23:07:10.940485 kubelet[2771]: E1213 23:07:10.940091 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:10.944943 containerd[1603]: time="2025-12-13T23:07:10.944900605Z" level=info msg="CreateContainer within sandbox \"cdd225421861a10a9aa294148925a48a797a397b5fd34a348f4097bcfea7b6a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 23:07:10.951531 containerd[1603]: time="2025-12-13T23:07:10.951487236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2w45t,Uid:2c10598f-24c4-437b-a735-02088f603f81,Namespace:tigera-operator,Attempt:0,}" Dec 13 23:07:10.954746 containerd[1603]: time="2025-12-13T23:07:10.954319370Z" level=info msg="Container 1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:10.962601 containerd[1603]: time="2025-12-13T23:07:10.962566768Z" level=info msg="CreateContainer within sandbox \"cdd225421861a10a9aa294148925a48a797a397b5fd34a348f4097bcfea7b6a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f\"" Dec 13 23:07:10.963266 containerd[1603]: time="2025-12-13T23:07:10.963240009Z" level=info msg="StartContainer for \"1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f\"" Dec 13 23:07:10.964606 containerd[1603]: time="2025-12-13T23:07:10.964579846Z" level=info msg="connecting to shim 1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f" 
address="unix:///run/containerd/s/57dbbff3ba12326c3c63ee3c8cb58d138709ea2ccfea48b1a15f832d477208fc" protocol=ttrpc version=3 Dec 13 23:07:10.973246 containerd[1603]: time="2025-12-13T23:07:10.973201689Z" level=info msg="connecting to shim 6b4dd2bd329aaf174506eca556b5b5bd3c841724a89ef9cdfc2b995d643f07b9" address="unix:///run/containerd/s/e8bb9b3fdc404ebcb995145527a98fd5cd8237aefa9ca4a2d5e820c1dbcf7768" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:10.983296 systemd[1]: Started cri-containerd-1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f.scope - libcontainer container 1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f. Dec 13 23:07:11.001306 systemd[1]: Started cri-containerd-6b4dd2bd329aaf174506eca556b5b5bd3c841724a89ef9cdfc2b995d643f07b9.scope - libcontainer container 6b4dd2bd329aaf174506eca556b5b5bd3c841724a89ef9cdfc2b995d643f07b9. Dec 13 23:07:11.012000 audit: BPF prog-id=142 op=LOAD Dec 13 23:07:11.013000 audit: BPF prog-id=143 op=LOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.013000 audit: BPF prog-id=143 op=UNLOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.013000 audit: BPF prog-id=144 op=LOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.013000 audit: BPF prog-id=145 op=LOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.013000 audit: BPF prog-id=145 op=UNLOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.013000 audit: BPF prog-id=144 op=UNLOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.013000 audit: BPF prog-id=146 op=LOAD Dec 13 23:07:11.013000 audit[2901]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662346464326264333239616166313734353036656361353536623562 Dec 13 23:07:11.029000 audit: BPF prog-id=147 op=LOAD Dec 13 23:07:11.029000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2835 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132363366646232343639613764393061306666663164656464316663 Dec 13 23:07:11.029000 audit: BPF prog-id=148 op=LOAD Dec 13 23:07:11.029000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2835 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132363366646232343639613764393061306666663164656464316663 Dec 13 23:07:11.030000 audit: BPF prog-id=148 op=UNLOAD Dec 13 23:07:11.030000 audit[2872]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2835 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132363366646232343639613764393061306666663164656464316663 Dec 13 23:07:11.030000 audit: BPF prog-id=147 op=UNLOAD Dec 13 23:07:11.030000 audit[2872]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2835 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132363366646232343639613764393061306666663164656464316663 Dec 13 23:07:11.030000 audit: BPF prog-id=149 op=LOAD Dec 13 23:07:11.030000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2835 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132363366646232343639613764393061306666663164656464316663 Dec 13 23:07:11.038543 containerd[1603]: time="2025-12-13T23:07:11.038413056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2w45t,Uid:2c10598f-24c4-437b-a735-02088f603f81,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b4dd2bd329aaf174506eca556b5b5bd3c841724a89ef9cdfc2b995d643f07b9\"" Dec 13 23:07:11.043958 containerd[1603]: time="2025-12-13T23:07:11.043792382Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 13 23:07:11.062302 containerd[1603]: time="2025-12-13T23:07:11.062266322Z" level=info msg="StartContainer for \"1263fdb2469a7d90a0fff1dedd1fc4cf353f5d16a05a6cd852bed3098e3cef8f\" returns successfully" Dec 13 23:07:11.199000 audit[2983]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 
23:07:11.199000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec93c6d0 a2=0 a3=1 items=0 ppid=2912 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 23:07:11.200000 audit[2984]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.200000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff98dc3d0 a2=0 a3=1 items=0 ppid=2912 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 23:07:11.202000 audit[2989]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.202000 audit[2989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe695da90 a2=0 a3=1 items=0 ppid=2912 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 23:07:11.204000 audit[2988]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 
13 23:07:11.204000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcca95e10 a2=0 a3=1 items=0 ppid=2912 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 23:07:11.205000 audit[2990]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.205000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1353730 a2=0 a3=1 items=0 ppid=2912 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 23:07:11.207000 audit[2991]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.207000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe14ff580 a2=0 a3=1 items=0 ppid=2912 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.207000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 23:07:11.305000 audit[2992]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 13 23:07:11.305000 audit[2992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe15b4530 a2=0 a3=1 items=0 ppid=2912 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 23:07:11.307000 audit[2994]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.307000 audit[2994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc465fb00 a2=0 a3=1 items=0 ppid=2912 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 23:07:11.311000 audit[2997]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.311000 audit[2997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc3cf8320 a2=0 a3=1 items=0 ppid=2912 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.311000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 23:07:11.312000 audit[2998]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.312000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9295fb0 a2=0 a3=1 items=0 ppid=2912 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 23:07:11.314000 audit[3000]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.314000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd45a10f0 a2=0 a3=1 items=0 ppid=2912 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 23:07:11.315000 audit[3001]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.315000 audit[3001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffe1802a30 a2=0 a3=1 items=0 ppid=2912 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 23:07:11.318000 audit[3003]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.318000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe3d26990 a2=0 a3=1 items=0 ppid=2912 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 23:07:11.321000 audit[3006]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.321000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffef9feda0 a2=0 a3=1 items=0 ppid=2912 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.321000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 23:07:11.323000 audit[3007]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.323000 audit[3007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcabc1df0 a2=0 a3=1 items=0 ppid=2912 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 23:07:11.325000 audit[3009]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.325000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff734da10 a2=0 a3=1 items=0 ppid=2912 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 23:07:11.326000 audit[3010]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.326000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc31d3aa0 a2=0 a3=1 
items=0 ppid=2912 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 23:07:11.329000 audit[3012]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.329000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcdd64020 a2=0 a3=1 items=0 ppid=2912 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 23:07:11.333000 audit[3015]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.333000 audit[3015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe138b790 a2=0 a3=1 items=0 ppid=2912 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.333000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 23:07:11.336000 audit[3018]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.336000 audit[3018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc388cf50 a2=0 a3=1 items=0 ppid=2912 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.336000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 23:07:11.337000 audit[3019]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.337000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdac040c0 a2=0 a3=1 items=0 ppid=2912 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 23:07:11.340000 audit[3021]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.340000 audit[3021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=fffffc5603d0 a2=0 a3=1 items=0 ppid=2912 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 23:07:11.343000 audit[3024]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.343000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe2ad0cb0 a2=0 a3=1 items=0 ppid=2912 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 23:07:11.344000 audit[3025]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.344000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5f13830 a2=0 a3=1 items=0 ppid=2912 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 23:07:11.346000 audit[3027]: NETFILTER_CFG 
table=nat:78 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 23:07:11.346000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd3bebae0 a2=0 a3=1 items=0 ppid=2912 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 23:07:11.365000 audit[3033]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:11.365000 audit[3033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd4eaf40 a2=0 a3=1 items=0 ppid=2912 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:11.381000 audit[3033]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:11.381000 audit[3033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffdd4eaf40 a2=0 a3=1 items=0 ppid=2912 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.381000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:11.381000 audit[3038]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.381000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffea1acde0 a2=0 a3=1 items=0 ppid=2912 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 23:07:11.384000 audit[3040]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.384000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffde55e530 a2=0 a3=1 items=0 ppid=2912 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 23:07:11.388000 audit[3043]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.388000 audit[3043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd5d2a0e0 a2=0 a3=1 items=0 ppid=2912 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 23:07:11.389000 audit[3044]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.389000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4fa2d80 a2=0 a3=1 items=0 ppid=2912 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 23:07:11.391000 audit[3046]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.391000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffed58d380 a2=0 a3=1 items=0 ppid=2912 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 23:07:11.392000 audit[3047]: NETFILTER_CFG table=filter:86 family=10 entries=1 
op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.392000 audit[3047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcae99fe0 a2=0 a3=1 items=0 ppid=2912 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.392000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 23:07:11.395000 audit[3049]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.395000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff2b4df40 a2=0 a3=1 items=0 ppid=2912 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 23:07:11.398000 audit[3052]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.398000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe8d7fda0 a2=0 a3=1 items=0 ppid=2912 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.398000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 23:07:11.399000 audit[3053]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.399000 audit[3053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffdfbd040 a2=0 a3=1 items=0 ppid=2912 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 23:07:11.401000 audit[3055]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.401000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffffe37730 a2=0 a3=1 items=0 ppid=2912 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 23:07:11.402000 audit[3056]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.402000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe8f6250 a2=0 
a3=1 items=0 ppid=2912 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 23:07:11.404000 audit[3058]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.404000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5103b40 a2=0 a3=1 items=0 ppid=2912 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 23:07:11.408000 audit[3061]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.408000 audit[3061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea154090 a2=0 a3=1 items=0 ppid=2912 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.408000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 23:07:11.412000 audit[3064]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.412000 audit[3064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff18673c0 a2=0 a3=1 items=0 ppid=2912 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 23:07:11.412000 audit[3065]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.412000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdfe193a0 a2=0 a3=1 items=0 ppid=2912 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 23:07:11.415000 audit[3067]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.415000 audit[3067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 
a0=3 a1=fffff7b6b530 a2=0 a3=1 items=0 ppid=2912 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 23:07:11.418000 audit[3070]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.418000 audit[3070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc30f6a0 a2=0 a3=1 items=0 ppid=2912 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 23:07:11.419000 audit[3071]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.419000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6a66440 a2=0 a3=1 items=0 ppid=2912 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 23:07:11.421000 audit[3073]: 
NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.421000 audit[3073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe91aa760 a2=0 a3=1 items=0 ppid=2912 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 23:07:11.423000 audit[3074]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.423000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1d1fff0 a2=0 a3=1 items=0 ppid=2912 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 23:07:11.425000 audit[3076]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.425000 audit[3076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff64e56a0 a2=0 a3=1 items=0 ppid=2912 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.425000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 23:07:11.428000 audit[3079]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 23:07:11.428000 audit[3079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc20affd0 a2=0 a3=1 items=0 ppid=2912 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 23:07:11.431000 audit[3081]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 23:07:11.431000 audit[3081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd5ddd2d0 a2=0 a3=1 items=0 ppid=2912 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.431000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:11.432000 audit[3081]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 23:07:11.432000 audit[3081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd5ddd2d0 a2=0 a3=1 items=0 ppid=2912 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:11.432000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:11.765972 kubelet[2771]: E1213 23:07:11.765922 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:11.775506 kubelet[2771]: I1213 23:07:11.775448 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vs5b9" podStartSLOduration=1.775436198 podStartE2EDuration="1.775436198s" podCreationTimestamp="2025-12-13 23:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:07:11.775081099 +0000 UTC m=+7.158289341" watchObservedRunningTime="2025-12-13 23:07:11.775436198 +0000 UTC m=+7.158644440" Dec 13 23:07:12.137137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006237993.mount: Deactivated successfully. 
Dec 13 23:07:12.620992 containerd[1603]: time="2025-12-13T23:07:12.620938172Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:12.621628 containerd[1603]: time="2025-12-13T23:07:12.621579227Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 13 23:07:12.622655 containerd[1603]: time="2025-12-13T23:07:12.622623477Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:12.625128 containerd[1603]: time="2025-12-13T23:07:12.625091679Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:12.626059 containerd[1603]: time="2025-12-13T23:07:12.625726609Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.581898166s" Dec 13 23:07:12.626059 containerd[1603]: time="2025-12-13T23:07:12.625758828Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 13 23:07:12.631956 containerd[1603]: time="2025-12-13T23:07:12.631918546Z" level=info msg="CreateContainer within sandbox \"6b4dd2bd329aaf174506eca556b5b5bd3c841724a89ef9cdfc2b995d643f07b9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 23:07:12.639777 containerd[1603]: time="2025-12-13T23:07:12.639276844Z" level=info msg="Container 
f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:12.646595 containerd[1603]: time="2025-12-13T23:07:12.646542488Z" level=info msg="CreateContainer within sandbox \"6b4dd2bd329aaf174506eca556b5b5bd3c841724a89ef9cdfc2b995d643f07b9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388\"" Dec 13 23:07:12.648691 containerd[1603]: time="2025-12-13T23:07:12.648648479Z" level=info msg="StartContainer for \"f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388\"" Dec 13 23:07:12.650928 containerd[1603]: time="2025-12-13T23:07:12.650825830Z" level=info msg="connecting to shim f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388" address="unix:///run/containerd/s/e8bb9b3fdc404ebcb995145527a98fd5cd8237aefa9ca4a2d5e820c1dbcf7768" protocol=ttrpc version=3 Dec 13 23:07:12.685343 systemd[1]: Started cri-containerd-f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388.scope - libcontainer container f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388. 
Dec 13 23:07:12.694000 audit: BPF prog-id=150 op=LOAD Dec 13 23:07:12.695000 audit: BPF prog-id=151 op=LOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:12.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.695000 audit: BPF prog-id=151 op=UNLOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:12.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.695000 audit: BPF prog-id=152 op=LOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:12.695000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.695000 audit: BPF prog-id=153 op=LOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:12.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.695000 audit: BPF prog-id=153 op=UNLOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:12.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.695000 audit: BPF prog-id=152 op=UNLOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:12.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.695000 audit: BPF prog-id=154 op=LOAD Dec 13 23:07:12.695000 audit[3090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2886 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:12.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632393533646232393636363239363039383835633734396539356162 Dec 13 23:07:12.715078 containerd[1603]: time="2025-12-13T23:07:12.715042621Z" level=info msg="StartContainer for \"f2953db2966629609885c749e95ab315b104c8fab3f1225cd2016cbcc5e54388\" returns successfully" Dec 13 23:07:17.304915 kubelet[2771]: E1213 23:07:17.304880 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:17.318314 kubelet[2771]: E1213 23:07:17.318270 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:17.329769 kubelet[2771]: I1213 23:07:17.329672 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2w45t" podStartSLOduration=5.743881305 podStartE2EDuration="7.329655151s" podCreationTimestamp="2025-12-13 23:07:10 +0000 UTC" 
firstStartedPulling="2025-12-13 23:07:11.043341463 +0000 UTC m=+6.426549705" lastFinishedPulling="2025-12-13 23:07:12.629115309 +0000 UTC m=+8.012323551" observedRunningTime="2025-12-13 23:07:12.779595888 +0000 UTC m=+8.162804090" watchObservedRunningTime="2025-12-13 23:07:17.329655151 +0000 UTC m=+12.712863433" Dec 13 23:07:17.778482 kubelet[2771]: E1213 23:07:17.778450 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:17.778616 kubelet[2771]: E1213 23:07:17.778533 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:18.085350 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 13 23:07:18.085480 kernel: audit: type=1106 audit(1765667238.082:520): pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 23:07:18.082000 audit[1830]: USER_END pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 23:07:18.083625 sudo[1830]: pam_unix(sudo:session): session closed for user root Dec 13 23:07:18.082000 audit[1830]: CRED_DISP pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:18.090268 sshd[1829]: Connection closed by 10.0.0.1 port 33442 Dec 13 23:07:18.091438 kernel: audit: type=1104 audit(1765667238.082:521): pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 23:07:18.091254 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Dec 13 23:07:18.091000 audit[1825]: USER_END pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:18.092000 audit[1825]: CRED_DISP pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:18.098417 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:33442.service: Deactivated successfully. Dec 13 23:07:18.100482 kernel: audit: type=1106 audit(1765667238.091:522): pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:18.100548 kernel: audit: type=1104 audit(1765667238.092:523): pid=1825 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:18.100924 systemd[1]: session-8.scope: Deactivated successfully. 
Dec 13 23:07:18.101773 systemd[1]: session-8.scope: Consumed 7.090s CPU time, 185.3M memory peak. Dec 13 23:07:18.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:33442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:18.103074 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Dec 13 23:07:18.105207 kernel: audit: type=1131 audit(1765667238.098:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:33442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:18.105325 systemd-logind[1585]: Removed session 8. Dec 13 23:07:19.679000 audit[3183]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:19.679000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffea4dee10 a2=0 a3=1 items=0 ppid=2912 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:19.689479 kernel: audit: type=1325 audit(1765667239.679:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:19.689661 kernel: audit: type=1300 audit(1765667239.679:525): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffea4dee10 a2=0 a3=1 items=0 ppid=2912 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:19.679000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:19.691921 kernel: audit: type=1327 audit(1765667239.679:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:19.691968 kernel: audit: type=1325 audit(1765667239.684:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:19.684000 audit[3183]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:19.694116 kernel: audit: type=1300 audit(1765667239.684:526): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea4dee10 a2=0 a3=1 items=0 ppid=2912 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:19.684000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea4dee10 a2=0 a3=1 items=0 ppid=2912 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:19.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:19.708000 audit[3185]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:19.708000 audit[3185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffeedc2f10 a2=0 a3=1 items=0 ppid=2912 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:19.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:19.714000 audit[3185]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:19.714000 audit[3185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeedc2f10 a2=0 a3=1 items=0 ppid=2912 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:19.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:19.918629 kubelet[2771]: E1213 23:07:19.918377 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:22.782000 audit[3187]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:22.782000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcc5913c0 a2=0 a3=1 items=0 ppid=2912 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:22.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:22.787000 audit[3187]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3187 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:22.787000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc5913c0 a2=0 a3=1 items=0 ppid=2912 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:22.787000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:22.798000 audit[3189]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:22.798000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffff8c3c10 a2=0 a3=1 items=0 ppid=2912 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:22.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:22.805000 audit[3189]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:22.805000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff8c3c10 a2=0 a3=1 items=0 ppid=2912 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:22.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:23.501488 update_engine[1586]: I20251213 23:07:23.501420 1586 
update_attempter.cc:509] Updating boot flags... Dec 13 23:07:23.819000 audit[3209]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:23.821186 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 23:07:23.821234 kernel: audit: type=1325 audit(1765667243.819:533): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:23.819000 audit[3209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd3a0ff40 a2=0 a3=1 items=0 ppid=2912 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:23.826807 kernel: audit: type=1300 audit(1765667243.819:533): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd3a0ff40 a2=0 a3=1 items=0 ppid=2912 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:23.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:23.828799 kernel: audit: type=1327 audit(1765667243.819:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:23.829000 audit[3209]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:23.829000 audit[3209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3a0ff40 a2=0 a3=1 items=0 ppid=2912 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:23.837222 kernel: audit: type=1325 audit(1765667243.829:534): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:23.837296 kernel: audit: type=1300 audit(1765667243.829:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3a0ff40 a2=0 a3=1 items=0 ppid=2912 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:23.837332 kernel: audit: type=1327 audit(1765667243.829:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:23.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:25.179000 audit[3213]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:25.179000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd8874df0 a2=0 a3=1 items=0 ppid=2912 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.187518 kernel: audit: type=1325 audit(1765667245.179:535): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:25.187665 kernel: audit: type=1300 audit(1765667245.179:535): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd8874df0 a2=0 a3=1 items=0 ppid=2912 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:25.190153 kernel: audit: type=1327 audit(1765667245.179:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:25.188000 audit[3213]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:25.194072 kernel: audit: type=1325 audit(1765667245.188:536): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:25.188000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd8874df0 a2=0 a3=1 items=0 ppid=2912 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:25.212596 systemd[1]: Created slice kubepods-besteffort-pod12e1a4ef_33f9_4deb_8bc1_05eccca50ff4.slice - libcontainer container kubepods-besteffort-pod12e1a4ef_33f9_4deb_8bc1_05eccca50ff4.slice. 
Dec 13 23:07:25.249514 kubelet[2771]: I1213 23:07:25.249473 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12e1a4ef-33f9-4deb-8bc1-05eccca50ff4-tigera-ca-bundle\") pod \"calico-typha-694f9b6c78-gclm8\" (UID: \"12e1a4ef-33f9-4deb-8bc1-05eccca50ff4\") " pod="calico-system/calico-typha-694f9b6c78-gclm8" Dec 13 23:07:25.249514 kubelet[2771]: I1213 23:07:25.249519 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/12e1a4ef-33f9-4deb-8bc1-05eccca50ff4-typha-certs\") pod \"calico-typha-694f9b6c78-gclm8\" (UID: \"12e1a4ef-33f9-4deb-8bc1-05eccca50ff4\") " pod="calico-system/calico-typha-694f9b6c78-gclm8" Dec 13 23:07:25.249976 kubelet[2771]: I1213 23:07:25.249541 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xtc\" (UniqueName: \"kubernetes.io/projected/12e1a4ef-33f9-4deb-8bc1-05eccca50ff4-kube-api-access-28xtc\") pod \"calico-typha-694f9b6c78-gclm8\" (UID: \"12e1a4ef-33f9-4deb-8bc1-05eccca50ff4\") " pod="calico-system/calico-typha-694f9b6c78-gclm8" Dec 13 23:07:25.385772 systemd[1]: Created slice kubepods-besteffort-pod2921b13c_b9cf_495b_8181_b0e52e0cb2da.slice - libcontainer container kubepods-besteffort-pod2921b13c_b9cf_495b_8181_b0e52e0cb2da.slice. 
Dec 13 23:07:25.451293 kubelet[2771]: I1213 23:07:25.451170 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-var-run-calico\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451293 kubelet[2771]: I1213 23:07:25.451221 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-xtables-lock\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451293 kubelet[2771]: I1213 23:07:25.451244 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-flexvol-driver-host\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451293 kubelet[2771]: I1213 23:07:25.451263 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-var-lib-calico\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451293 kubelet[2771]: I1213 23:07:25.451279 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-cni-bin-dir\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451489 kubelet[2771]: I1213 23:07:25.451295 2771 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-cni-log-dir\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451489 kubelet[2771]: I1213 23:07:25.451310 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-lib-modules\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451489 kubelet[2771]: I1213 23:07:25.451331 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2921b13c-b9cf-495b-8181-b0e52e0cb2da-node-certs\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451489 kubelet[2771]: I1213 23:07:25.451345 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2921b13c-b9cf-495b-8181-b0e52e0cb2da-tigera-ca-bundle\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451489 kubelet[2771]: I1213 23:07:25.451361 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-cni-net-dir\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451593 kubelet[2771]: I1213 23:07:25.451377 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qgvpj\" (UniqueName: \"kubernetes.io/projected/2921b13c-b9cf-495b-8181-b0e52e0cb2da-kube-api-access-qgvpj\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.451593 kubelet[2771]: I1213 23:07:25.451392 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2921b13c-b9cf-495b-8181-b0e52e0cb2da-policysync\") pod \"calico-node-fhcw6\" (UID: \"2921b13c-b9cf-495b-8181-b0e52e0cb2da\") " pod="calico-system/calico-node-fhcw6" Dec 13 23:07:25.515356 kubelet[2771]: E1213 23:07:25.515274 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:25.517884 containerd[1603]: time="2025-12-13T23:07:25.517844528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-694f9b6c78-gclm8,Uid:12e1a4ef-33f9-4deb-8bc1-05eccca50ff4,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:25.550790 containerd[1603]: time="2025-12-13T23:07:25.550734312Z" level=info msg="connecting to shim 74e0f8bb41be9fefceb48bfed37889e1e952235d4b941fad154e799328d90d94" address="unix:///run/containerd/s/3a6353a1fb6ba8f246c1343c635ecbb391ef6e8316fb5cb6f033bd30eb573dd5" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:25.555392 kubelet[2771]: E1213 23:07:25.555358 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.555392 kubelet[2771]: W1213 23:07:25.555380 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.555392 kubelet[2771]: E1213 23:07:25.555402 2771 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.557507 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.560497 kubelet[2771]: W1213 23:07:25.557522 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.557537 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.557913 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.560497 kubelet[2771]: W1213 23:07:25.557924 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.557935 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.558416 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.560497 kubelet[2771]: W1213 23:07:25.558429 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.558440 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.560497 kubelet[2771]: E1213 23:07:25.558673 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.560682 kubelet[2771]: W1213 23:07:25.558683 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.560682 kubelet[2771]: E1213 23:07:25.558796 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.560682 kubelet[2771]: E1213 23:07:25.560020 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.560682 kubelet[2771]: W1213 23:07:25.560033 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.560682 kubelet[2771]: E1213 23:07:25.560063 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.560778 kubelet[2771]: E1213 23:07:25.560723 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.560778 kubelet[2771]: W1213 23:07:25.560736 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.560821 kubelet[2771]: E1213 23:07:25.560804 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.561927 kubelet[2771]: E1213 23:07:25.561159 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.561927 kubelet[2771]: W1213 23:07:25.561173 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.561927 kubelet[2771]: E1213 23:07:25.561184 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.561927 kubelet[2771]: E1213 23:07:25.561709 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.561927 kubelet[2771]: W1213 23:07:25.561723 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.561927 kubelet[2771]: E1213 23:07:25.561733 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.563564 kubelet[2771]: E1213 23:07:25.563535 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.563564 kubelet[2771]: W1213 23:07:25.563555 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.563564 kubelet[2771]: E1213 23:07:25.563569 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.563813 kubelet[2771]: E1213 23:07:25.563794 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.563813 kubelet[2771]: W1213 23:07:25.563808 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.563909 kubelet[2771]: E1213 23:07:25.563817 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.564507 kubelet[2771]: E1213 23:07:25.564480 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.564565 kubelet[2771]: W1213 23:07:25.564516 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.564565 kubelet[2771]: E1213 23:07:25.564530 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.565554 kubelet[2771]: E1213 23:07:25.565518 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.565554 kubelet[2771]: W1213 23:07:25.565535 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.565554 kubelet[2771]: E1213 23:07:25.565548 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.567457 kubelet[2771]: E1213 23:07:25.567425 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.567638 kubelet[2771]: W1213 23:07:25.567464 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.567638 kubelet[2771]: E1213 23:07:25.567477 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.568332 kubelet[2771]: E1213 23:07:25.568302 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.568332 kubelet[2771]: W1213 23:07:25.568318 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.568332 kubelet[2771]: E1213 23:07:25.568336 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.568947 kubelet[2771]: E1213 23:07:25.568774 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.568947 kubelet[2771]: W1213 23:07:25.568785 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.568947 kubelet[2771]: E1213 23:07:25.568795 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.569360 kubelet[2771]: E1213 23:07:25.569017 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.569360 kubelet[2771]: W1213 23:07:25.569026 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.569360 kubelet[2771]: E1213 23:07:25.569035 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.570792 kubelet[2771]: E1213 23:07:25.570766 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.570792 kubelet[2771]: W1213 23:07:25.570783 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.570792 kubelet[2771]: E1213 23:07:25.570797 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.575665 kubelet[2771]: E1213 23:07:25.574199 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.575665 kubelet[2771]: W1213 23:07:25.574216 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.575665 kubelet[2771]: E1213 23:07:25.574229 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.575665 kubelet[2771]: E1213 23:07:25.575539 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:25.576361 kubelet[2771]: E1213 23:07:25.576093 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.576361 kubelet[2771]: W1213 23:07:25.576353 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.576451 kubelet[2771]: E1213 23:07:25.576374 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.576717 kubelet[2771]: E1213 23:07:25.576580 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.576717 kubelet[2771]: W1213 23:07:25.576596 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.576717 kubelet[2771]: E1213 23:07:25.576605 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.586263 kubelet[2771]: E1213 23:07:25.585426 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.586263 kubelet[2771]: W1213 23:07:25.585452 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.586263 kubelet[2771]: E1213 23:07:25.585470 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.592515 kubelet[2771]: E1213 23:07:25.592355 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.592515 kubelet[2771]: W1213 23:07:25.592378 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.592515 kubelet[2771]: E1213 23:07:25.592398 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.610350 systemd[1]: Started cri-containerd-74e0f8bb41be9fefceb48bfed37889e1e952235d4b941fad154e799328d90d94.scope - libcontainer container 74e0f8bb41be9fefceb48bfed37889e1e952235d4b941fad154e799328d90d94. 
Dec 13 23:07:25.619000 audit: BPF prog-id=155 op=LOAD Dec 13 23:07:25.619000 audit: BPF prog-id=156 op=LOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.619000 audit: BPF prog-id=156 op=UNLOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.619000 audit: BPF prog-id=157 op=LOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.619000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.619000 audit: BPF prog-id=158 op=LOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.619000 audit: BPF prog-id=158 op=UNLOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.619000 audit: BPF prog-id=157 op=UNLOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.619000 audit: BPF prog-id=159 op=LOAD Dec 13 23:07:25.619000 audit[3259]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3224 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734653066386262343162653966656663656234386266656433373838 Dec 13 23:07:25.639052 kubelet[2771]: E1213 23:07:25.639024 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.639052 kubelet[2771]: W1213 23:07:25.639047 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.639208 kubelet[2771]: E1213 23:07:25.639066 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.639269 kubelet[2771]: E1213 23:07:25.639246 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.640812 kubelet[2771]: W1213 23:07:25.639259 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.640812 kubelet[2771]: E1213 23:07:25.640812 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.641203 kubelet[2771]: E1213 23:07:25.641012 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.641203 kubelet[2771]: W1213 23:07:25.641025 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.641203 kubelet[2771]: E1213 23:07:25.641037 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.641203 kubelet[2771]: E1213 23:07:25.641182 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.641203 kubelet[2771]: W1213 23:07:25.641189 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.641203 kubelet[2771]: E1213 23:07:25.641197 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641369 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.642481 kubelet[2771]: W1213 23:07:25.641377 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641386 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641520 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.642481 kubelet[2771]: W1213 23:07:25.641527 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641534 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641660 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.642481 kubelet[2771]: W1213 23:07:25.641667 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641674 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.642481 kubelet[2771]: E1213 23:07:25.641805 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644202 kubelet[2771]: W1213 23:07:25.641812 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644202 kubelet[2771]: E1213 23:07:25.641819 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.644202 kubelet[2771]: E1213 23:07:25.641981 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644202 kubelet[2771]: W1213 23:07:25.641988 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644202 kubelet[2771]: E1213 23:07:25.641996 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.644202 kubelet[2771]: E1213 23:07:25.642123 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644202 kubelet[2771]: W1213 23:07:25.642131 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644202 kubelet[2771]: E1213 23:07:25.642138 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.644202 kubelet[2771]: E1213 23:07:25.643227 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644202 kubelet[2771]: W1213 23:07:25.643237 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643246 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643425 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644398 kubelet[2771]: W1213 23:07:25.643432 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643440 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643604 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644398 kubelet[2771]: W1213 23:07:25.643613 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643620 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643767 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644398 kubelet[2771]: W1213 23:07:25.643775 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644398 kubelet[2771]: E1213 23:07:25.643784 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.643917 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644636 kubelet[2771]: W1213 23:07:25.643924 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.643931 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.644057 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644636 kubelet[2771]: W1213 23:07:25.644064 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.644073 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.644230 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644636 kubelet[2771]: W1213 23:07:25.644238 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.644245 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.644636 kubelet[2771]: E1213 23:07:25.644413 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644819 kubelet[2771]: W1213 23:07:25.644421 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644819 kubelet[2771]: E1213 23:07:25.644430 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.644819 kubelet[2771]: E1213 23:07:25.644554 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644819 kubelet[2771]: W1213 23:07:25.644561 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644819 kubelet[2771]: E1213 23:07:25.644568 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.644819 kubelet[2771]: E1213 23:07:25.644697 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.644819 kubelet[2771]: W1213 23:07:25.644703 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.644819 kubelet[2771]: E1213 23:07:25.644710 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.651148 containerd[1603]: time="2025-12-13T23:07:25.651083955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-694f9b6c78-gclm8,Uid:12e1a4ef-33f9-4deb-8bc1-05eccca50ff4,Namespace:calico-system,Attempt:0,} returns sandbox id \"74e0f8bb41be9fefceb48bfed37889e1e952235d4b941fad154e799328d90d94\"" Dec 13 23:07:25.654633 kubelet[2771]: E1213 23:07:25.654499 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.654633 kubelet[2771]: W1213 23:07:25.654612 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.654633 kubelet[2771]: E1213 23:07:25.654630 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.654770 kubelet[2771]: I1213 23:07:25.654660 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e4691c-7de1-4668-91bc-eef5c31432bb-kubelet-dir\") pod \"csi-node-driver-8xf56\" (UID: \"68e4691c-7de1-4668-91bc-eef5c31432bb\") " pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:25.655114 kubelet[2771]: E1213 23:07:25.655029 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.655385 kubelet[2771]: W1213 23:07:25.655182 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.655385 kubelet[2771]: E1213 23:07:25.655199 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.656377 kubelet[2771]: E1213 23:07:25.656341 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:25.656847 kubelet[2771]: E1213 23:07:25.656825 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.656847 kubelet[2771]: W1213 23:07:25.656841 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.656942 kubelet[2771]: E1213 23:07:25.656855 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.658937 kubelet[2771]: E1213 23:07:25.658717 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.658937 kubelet[2771]: W1213 23:07:25.658736 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.658937 kubelet[2771]: E1213 23:07:25.658748 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.658937 kubelet[2771]: I1213 23:07:25.658808 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhh2\" (UniqueName: \"kubernetes.io/projected/68e4691c-7de1-4668-91bc-eef5c31432bb-kube-api-access-vfhh2\") pod \"csi-node-driver-8xf56\" (UID: \"68e4691c-7de1-4668-91bc-eef5c31432bb\") " pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:25.659446 containerd[1603]: time="2025-12-13T23:07:25.659134921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 13 23:07:25.659767 kubelet[2771]: E1213 23:07:25.659716 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.659767 kubelet[2771]: W1213 23:07:25.659736 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.659767 kubelet[2771]: E1213 23:07:25.659750 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.659846 kubelet[2771]: I1213 23:07:25.659780 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68e4691c-7de1-4668-91bc-eef5c31432bb-socket-dir\") pod \"csi-node-driver-8xf56\" (UID: \"68e4691c-7de1-4668-91bc-eef5c31432bb\") " pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:25.660723 kubelet[2771]: E1213 23:07:25.660687 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.660723 kubelet[2771]: W1213 23:07:25.660709 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.660723 kubelet[2771]: E1213 23:07:25.660721 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.660903 kubelet[2771]: I1213 23:07:25.660848 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68e4691c-7de1-4668-91bc-eef5c31432bb-varrun\") pod \"csi-node-driver-8xf56\" (UID: \"68e4691c-7de1-4668-91bc-eef5c31432bb\") " pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:25.664148 kubelet[2771]: E1213 23:07:25.663477 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.664148 kubelet[2771]: W1213 23:07:25.663498 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.664400 kubelet[2771]: E1213 23:07:25.663511 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.664739 kubelet[2771]: E1213 23:07:25.664720 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.664739 kubelet[2771]: W1213 23:07:25.664735 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.664831 kubelet[2771]: E1213 23:07:25.664747 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.665685 kubelet[2771]: E1213 23:07:25.665661 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.665685 kubelet[2771]: W1213 23:07:25.665678 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.665685 kubelet[2771]: E1213 23:07:25.665690 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.666675 kubelet[2771]: E1213 23:07:25.666648 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.666675 kubelet[2771]: W1213 23:07:25.666665 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.666675 kubelet[2771]: E1213 23:07:25.666677 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.666796 kubelet[2771]: I1213 23:07:25.666727 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68e4691c-7de1-4668-91bc-eef5c31432bb-registration-dir\") pod \"csi-node-driver-8xf56\" (UID: \"68e4691c-7de1-4668-91bc-eef5c31432bb\") " pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:25.668361 kubelet[2771]: E1213 23:07:25.668307 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.668457 kubelet[2771]: W1213 23:07:25.668402 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.668457 kubelet[2771]: E1213 23:07:25.668418 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.668985 kubelet[2771]: E1213 23:07:25.668939 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.668985 kubelet[2771]: W1213 23:07:25.668955 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.668985 kubelet[2771]: E1213 23:07:25.668967 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.669427 kubelet[2771]: E1213 23:07:25.669391 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.669427 kubelet[2771]: W1213 23:07:25.669408 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.669427 kubelet[2771]: E1213 23:07:25.669420 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.670126 kubelet[2771]: E1213 23:07:25.669984 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.670126 kubelet[2771]: W1213 23:07:25.669999 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.670126 kubelet[2771]: E1213 23:07:25.670012 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.670736 kubelet[2771]: E1213 23:07:25.670227 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.670736 kubelet[2771]: W1213 23:07:25.670246 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.670736 kubelet[2771]: E1213 23:07:25.670260 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.688592 kubelet[2771]: E1213 23:07:25.688542 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:25.689059 containerd[1603]: time="2025-12-13T23:07:25.689022422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhcw6,Uid:2921b13c-b9cf-495b-8181-b0e52e0cb2da,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:25.706064 containerd[1603]: time="2025-12-13T23:07:25.705949395Z" level=info msg="connecting to shim f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15" address="unix:///run/containerd/s/531ca2ca35da9d0809e0a5134bd13b0826a4f8ad31fcd74d2e17deb15e7b6610" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:25.733363 systemd[1]: Started cri-containerd-f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15.scope - libcontainer container f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15. 
Dec 13 23:07:25.745000 audit: BPF prog-id=160 op=LOAD Dec 13 23:07:25.745000 audit: BPF prog-id=161 op=LOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.745000 audit: BPF prog-id=161 op=UNLOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.745000 audit: BPF prog-id=162 op=LOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.745000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.745000 audit: BPF prog-id=163 op=LOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.745000 audit: BPF prog-id=163 op=UNLOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.745000 audit: BPF prog-id=162 op=UNLOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:25.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.745000 audit: BPF prog-id=164 op=LOAD Dec 13 23:07:25.745000 audit[3351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3340 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:25.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303866323832656165363933623665643638663665616565386233 Dec 13 23:07:25.759611 containerd[1603]: time="2025-12-13T23:07:25.759502650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhcw6,Uid:2921b13c-b9cf-495b-8181-b0e52e0cb2da,Namespace:calico-system,Attempt:0,} returns sandbox id \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\"" Dec 13 23:07:25.760341 kubelet[2771]: E1213 23:07:25.760310 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:25.771620 kubelet[2771]: E1213 23:07:25.771586 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.771620 kubelet[2771]: W1213 23:07:25.771605 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 
23:07:25.771620 kubelet[2771]: E1213 23:07:25.771621 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.772145 kubelet[2771]: E1213 23:07:25.771848 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.772145 kubelet[2771]: W1213 23:07:25.771862 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.772145 kubelet[2771]: E1213 23:07:25.771871 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.772145 kubelet[2771]: E1213 23:07:25.772041 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.772145 kubelet[2771]: W1213 23:07:25.772049 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.772145 kubelet[2771]: E1213 23:07:25.772057 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.772293 kubelet[2771]: E1213 23:07:25.772259 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.772293 kubelet[2771]: W1213 23:07:25.772270 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.772293 kubelet[2771]: E1213 23:07:25.772278 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.773119 kubelet[2771]: E1213 23:07:25.772887 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.773119 kubelet[2771]: W1213 23:07:25.772904 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.773119 kubelet[2771]: E1213 23:07:25.772917 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.773119 kubelet[2771]: E1213 23:07:25.773095 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.773119 kubelet[2771]: W1213 23:07:25.773127 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.773267 kubelet[2771]: E1213 23:07:25.773136 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.773345 kubelet[2771]: E1213 23:07:25.773324 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.773345 kubelet[2771]: W1213 23:07:25.773340 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.773398 kubelet[2771]: E1213 23:07:25.773349 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.773535 kubelet[2771]: E1213 23:07:25.773515 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.773535 kubelet[2771]: W1213 23:07:25.773526 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.773611 kubelet[2771]: E1213 23:07:25.773544 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.773750 kubelet[2771]: E1213 23:07:25.773731 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.773796 kubelet[2771]: W1213 23:07:25.773744 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.773796 kubelet[2771]: E1213 23:07:25.773766 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.774050 kubelet[2771]: E1213 23:07:25.774017 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.774050 kubelet[2771]: W1213 23:07:25.774032 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.774050 kubelet[2771]: E1213 23:07:25.774041 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.774266 kubelet[2771]: E1213 23:07:25.774251 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.774289 kubelet[2771]: W1213 23:07:25.774265 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.774325 kubelet[2771]: E1213 23:07:25.774288 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.774655 kubelet[2771]: E1213 23:07:25.774624 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.774655 kubelet[2771]: W1213 23:07:25.774641 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.774655 kubelet[2771]: E1213 23:07:25.774653 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.774826 kubelet[2771]: E1213 23:07:25.774814 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.774849 kubelet[2771]: W1213 23:07:25.774827 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.774849 kubelet[2771]: E1213 23:07:25.774837 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.775260 kubelet[2771]: E1213 23:07:25.775245 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.775294 kubelet[2771]: W1213 23:07:25.775261 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.775294 kubelet[2771]: E1213 23:07:25.775272 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.777337 kubelet[2771]: E1213 23:07:25.777309 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.777337 kubelet[2771]: W1213 23:07:25.777326 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.777395 kubelet[2771]: E1213 23:07:25.777338 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.777546 kubelet[2771]: E1213 23:07:25.777532 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.777546 kubelet[2771]: W1213 23:07:25.777544 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.777595 kubelet[2771]: E1213 23:07:25.777552 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.777871 kubelet[2771]: E1213 23:07:25.777855 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.777871 kubelet[2771]: W1213 23:07:25.777870 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.777919 kubelet[2771]: E1213 23:07:25.777881 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.778170 kubelet[2771]: E1213 23:07:25.778156 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.778196 kubelet[2771]: W1213 23:07:25.778169 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.778196 kubelet[2771]: E1213 23:07:25.778179 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.778356 kubelet[2771]: E1213 23:07:25.778344 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.778356 kubelet[2771]: W1213 23:07:25.778354 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.778403 kubelet[2771]: E1213 23:07:25.778363 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.778531 kubelet[2771]: E1213 23:07:25.778521 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.778556 kubelet[2771]: W1213 23:07:25.778531 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.778556 kubelet[2771]: E1213 23:07:25.778539 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.778698 kubelet[2771]: E1213 23:07:25.778688 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.778719 kubelet[2771]: W1213 23:07:25.778698 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.778719 kubelet[2771]: E1213 23:07:25.778706 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.778900 kubelet[2771]: E1213 23:07:25.778886 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.779874 kubelet[2771]: W1213 23:07:25.778901 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.779874 kubelet[2771]: E1213 23:07:25.779039 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.779874 kubelet[2771]: E1213 23:07:25.779454 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.779874 kubelet[2771]: W1213 23:07:25.779487 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.779874 kubelet[2771]: E1213 23:07:25.779499 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.779874 kubelet[2771]: E1213 23:07:25.779709 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.779874 kubelet[2771]: W1213 23:07:25.779718 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.779874 kubelet[2771]: E1213 23:07:25.779726 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:25.780072 kubelet[2771]: E1213 23:07:25.779971 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.780072 kubelet[2771]: W1213 23:07:25.779981 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.780072 kubelet[2771]: E1213 23:07:25.779990 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:25.794777 kubelet[2771]: E1213 23:07:25.794713 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:25.794777 kubelet[2771]: W1213 23:07:25.794730 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:25.794777 kubelet[2771]: E1213 23:07:25.794744 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:26.203000 audit[3406]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:26.203000 audit[3406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffdbdda80 a2=0 a3=1 items=0 ppid=2912 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:26.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:26.210000 audit[3406]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:26.210000 audit[3406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffdbdda80 a2=0 a3=1 items=0 ppid=2912 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:26.210000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:26.626314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107088319.mount: Deactivated successfully. Dec 13 23:07:27.127768 containerd[1603]: time="2025-12-13T23:07:27.127697198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:27.128599 containerd[1603]: time="2025-12-13T23:07:27.128535822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 13 23:07:27.129480 containerd[1603]: time="2025-12-13T23:07:27.129428221Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:27.133168 containerd[1603]: time="2025-12-13T23:07:27.133116446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:27.134804 containerd[1603]: time="2025-12-13T23:07:27.134434878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.475261186s" Dec 13 23:07:27.134804 containerd[1603]: time="2025-12-13T23:07:27.134468247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 13 23:07:27.137387 containerd[1603]: time="2025-12-13T23:07:27.137197015Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 13 23:07:27.182614 containerd[1603]: time="2025-12-13T23:07:27.182571533Z" level=info msg="CreateContainer within sandbox \"74e0f8bb41be9fefceb48bfed37889e1e952235d4b941fad154e799328d90d94\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 23:07:27.189916 containerd[1603]: time="2025-12-13T23:07:27.189017774Z" level=info msg="Container 409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:27.195469 containerd[1603]: time="2025-12-13T23:07:27.195431327Z" level=info msg="CreateContainer within sandbox \"74e0f8bb41be9fefceb48bfed37889e1e952235d4b941fad154e799328d90d94\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91\"" Dec 13 23:07:27.197199 containerd[1603]: time="2025-12-13T23:07:27.197161349Z" level=info msg="StartContainer for \"409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91\"" Dec 13 23:07:27.198784 containerd[1603]: time="2025-12-13T23:07:27.198736169Z" level=info msg="connecting to shim 409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91" address="unix:///run/containerd/s/3a6353a1fb6ba8f246c1343c635ecbb391ef6e8316fb5cb6f033bd30eb573dd5" protocol=ttrpc version=3 Dec 13 23:07:27.229411 systemd[1]: Started cri-containerd-409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91.scope - libcontainer container 409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91. 
Dec 13 23:07:27.241000 audit: BPF prog-id=165 op=LOAD Dec 13 23:07:27.242000 audit: BPF prog-id=166 op=LOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:27.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.242000 audit: BPF prog-id=166 op=UNLOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:27.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.242000 audit: BPF prog-id=167 op=LOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:27.242000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.242000 audit: BPF prog-id=168 op=LOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:27.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.242000 audit: BPF prog-id=168 op=UNLOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:27.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.242000 audit: BPF prog-id=167 op=UNLOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:27.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.242000 audit: BPF prog-id=169 op=LOAD Dec 13 23:07:27.242000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3224 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:27.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396366356334393562313335393837376638303366333961626130 Dec 13 23:07:27.268788 containerd[1603]: time="2025-12-13T23:07:27.268738943Z" level=info msg="StartContainer for \"409cf5c495b1359877f803f39aba0b3eb2f81eaa656e5412a7f6c36ea5486f91\" returns successfully" Dec 13 23:07:27.736508 kubelet[2771]: E1213 23:07:27.736091 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:27.802273 kubelet[2771]: E1213 23:07:27.802216 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:27.826978 kubelet[2771]: I1213 23:07:27.826902 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-694f9b6c78-gclm8" 
podStartSLOduration=1.348313187 podStartE2EDuration="2.826887836s" podCreationTimestamp="2025-12-13 23:07:25 +0000 UTC" firstStartedPulling="2025-12-13 23:07:25.658468925 +0000 UTC m=+21.041677127" lastFinishedPulling="2025-12-13 23:07:27.137043534 +0000 UTC m=+22.520251776" observedRunningTime="2025-12-13 23:07:27.826331847 +0000 UTC m=+23.209540089" watchObservedRunningTime="2025-12-13 23:07:27.826887836 +0000 UTC m=+23.210096078" Dec 13 23:07:27.859706 kubelet[2771]: E1213 23:07:27.859574 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.859706 kubelet[2771]: W1213 23:07:27.859602 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.859706 kubelet[2771]: E1213 23:07:27.859623 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.859960 kubelet[2771]: E1213 23:07:27.859946 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.860053 kubelet[2771]: W1213 23:07:27.860003 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.860165 kubelet[2771]: E1213 23:07:27.860150 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.860523 kubelet[2771]: E1213 23:07:27.860409 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.860523 kubelet[2771]: W1213 23:07:27.860423 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.860523 kubelet[2771]: E1213 23:07:27.860434 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.860696 kubelet[2771]: E1213 23:07:27.860683 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.860751 kubelet[2771]: W1213 23:07:27.860741 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.860815 kubelet[2771]: E1213 23:07:27.860805 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.861094 kubelet[2771]: E1213 23:07:27.861076 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.861210 kubelet[2771]: W1213 23:07:27.861195 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.861266 kubelet[2771]: E1213 23:07:27.861256 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.861573 kubelet[2771]: E1213 23:07:27.861474 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.861573 kubelet[2771]: W1213 23:07:27.861485 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.861573 kubelet[2771]: E1213 23:07:27.861496 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.861736 kubelet[2771]: E1213 23:07:27.861723 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.861791 kubelet[2771]: W1213 23:07:27.861780 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.861939 kubelet[2771]: E1213 23:07:27.861833 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.862095 kubelet[2771]: E1213 23:07:27.862080 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.862180 kubelet[2771]: W1213 23:07:27.862169 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.862245 kubelet[2771]: E1213 23:07:27.862233 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.862560 kubelet[2771]: E1213 23:07:27.862452 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.862560 kubelet[2771]: W1213 23:07:27.862463 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.862560 kubelet[2771]: E1213 23:07:27.862473 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.862711 kubelet[2771]: E1213 23:07:27.862698 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.862760 kubelet[2771]: W1213 23:07:27.862750 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.862814 kubelet[2771]: E1213 23:07:27.862804 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.863030 kubelet[2771]: E1213 23:07:27.863016 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.863098 kubelet[2771]: W1213 23:07:27.863087 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.863173 kubelet[2771]: E1213 23:07:27.863161 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.863512 kubelet[2771]: E1213 23:07:27.863396 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.863512 kubelet[2771]: W1213 23:07:27.863408 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.863512 kubelet[2771]: E1213 23:07:27.863418 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.863676 kubelet[2771]: E1213 23:07:27.863662 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.863731 kubelet[2771]: W1213 23:07:27.863720 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.863788 kubelet[2771]: E1213 23:07:27.863779 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.864000 kubelet[2771]: E1213 23:07:27.863987 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.864151 kubelet[2771]: W1213 23:07:27.864054 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.864151 kubelet[2771]: E1213 23:07:27.864071 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.864586 kubelet[2771]: E1213 23:07:27.864548 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.864586 kubelet[2771]: W1213 23:07:27.864566 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.864586 kubelet[2771]: E1213 23:07:27.864579 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.888773 kubelet[2771]: E1213 23:07:27.888731 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.888773 kubelet[2771]: W1213 23:07:27.888755 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.888773 kubelet[2771]: E1213 23:07:27.888774 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.888993 kubelet[2771]: E1213 23:07:27.888973 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.888993 kubelet[2771]: W1213 23:07:27.888985 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.888993 kubelet[2771]: E1213 23:07:27.888994 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.889285 kubelet[2771]: E1213 23:07:27.889245 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.889285 kubelet[2771]: W1213 23:07:27.889267 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.889285 kubelet[2771]: E1213 23:07:27.889281 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.889481 kubelet[2771]: E1213 23:07:27.889466 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.889481 kubelet[2771]: W1213 23:07:27.889477 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.889531 kubelet[2771]: E1213 23:07:27.889486 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.889663 kubelet[2771]: E1213 23:07:27.889639 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.889663 kubelet[2771]: W1213 23:07:27.889651 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.889663 kubelet[2771]: E1213 23:07:27.889659 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.889847 kubelet[2771]: E1213 23:07:27.889835 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.889847 kubelet[2771]: W1213 23:07:27.889845 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.889905 kubelet[2771]: E1213 23:07:27.889854 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.890122 kubelet[2771]: E1213 23:07:27.890093 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.890122 kubelet[2771]: W1213 23:07:27.890115 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.890187 kubelet[2771]: E1213 23:07:27.890126 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.890379 kubelet[2771]: E1213 23:07:27.890353 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.890379 kubelet[2771]: W1213 23:07:27.890366 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.890379 kubelet[2771]: E1213 23:07:27.890375 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.890543 kubelet[2771]: E1213 23:07:27.890532 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.890543 kubelet[2771]: W1213 23:07:27.890542 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.890588 kubelet[2771]: E1213 23:07:27.890550 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.890700 kubelet[2771]: E1213 23:07:27.890691 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.890725 kubelet[2771]: W1213 23:07:27.890701 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.890725 kubelet[2771]: E1213 23:07:27.890708 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.890873 kubelet[2771]: E1213 23:07:27.890862 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.890907 kubelet[2771]: W1213 23:07:27.890872 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.890907 kubelet[2771]: E1213 23:07:27.890890 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.891062 kubelet[2771]: E1213 23:07:27.891050 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.891086 kubelet[2771]: W1213 23:07:27.891062 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.891086 kubelet[2771]: E1213 23:07:27.891070 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.891257 kubelet[2771]: E1213 23:07:27.891245 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.891281 kubelet[2771]: W1213 23:07:27.891258 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.891281 kubelet[2771]: E1213 23:07:27.891267 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.891525 kubelet[2771]: E1213 23:07:27.891508 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.891549 kubelet[2771]: W1213 23:07:27.891526 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.891549 kubelet[2771]: E1213 23:07:27.891539 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.891682 kubelet[2771]: E1213 23:07:27.891672 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.891703 kubelet[2771]: W1213 23:07:27.891682 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.891703 kubelet[2771]: E1213 23:07:27.891689 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.891887 kubelet[2771]: E1213 23:07:27.891869 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.891909 kubelet[2771]: W1213 23:07:27.891888 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.891909 kubelet[2771]: E1213 23:07:27.891899 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:27.892143 kubelet[2771]: E1213 23:07:27.892128 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.892166 kubelet[2771]: W1213 23:07:27.892144 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.892166 kubelet[2771]: E1213 23:07:27.892156 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 23:07:27.892353 kubelet[2771]: E1213 23:07:27.892340 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 23:07:27.892378 kubelet[2771]: W1213 23:07:27.892353 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 23:07:27.892378 kubelet[2771]: E1213 23:07:27.892362 2771 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 23:07:28.310364 containerd[1603]: time="2025-12-13T23:07:28.310281356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:28.311144 containerd[1603]: time="2025-12-13T23:07:28.311066236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=741" Dec 13 23:07:28.311829 containerd[1603]: time="2025-12-13T23:07:28.311795222Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:28.313744 containerd[1603]: time="2025-12-13T23:07:28.313697507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:28.314369 containerd[1603]: time="2025-12-13T23:07:28.314332229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.177102966s" Dec 13 23:07:28.314408 containerd[1603]: time="2025-12-13T23:07:28.314364837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 13 23:07:28.318510 containerd[1603]: time="2025-12-13T23:07:28.318483727Z" level=info msg="CreateContainer within sandbox \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 23:07:28.326432 containerd[1603]: time="2025-12-13T23:07:28.326389943Z" level=info msg="Container c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:28.336306 containerd[1603]: time="2025-12-13T23:07:28.336256858Z" level=info msg="CreateContainer within sandbox \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f\"" Dec 13 23:07:28.338225 containerd[1603]: time="2025-12-13T23:07:28.338190511Z" level=info msg="StartContainer for \"c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f\"" Dec 13 23:07:28.339650 containerd[1603]: time="2025-12-13T23:07:28.339625516Z" level=info msg="connecting to shim c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f" address="unix:///run/containerd/s/531ca2ca35da9d0809e0a5134bd13b0826a4f8ad31fcd74d2e17deb15e7b6610" protocol=ttrpc version=3 Dec 13 23:07:28.361392 systemd[1]: Started cri-containerd-c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f.scope - libcontainer container 
c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f. Dec 13 23:07:28.425000 audit: BPF prog-id=170 op=LOAD Dec 13 23:07:28.425000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3340 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:28.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336663933383362356463336230613131613733353663386566303966 Dec 13 23:07:28.425000 audit: BPF prog-id=171 op=LOAD Dec 13 23:07:28.425000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3340 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:28.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336663933383362356463336230613131613733353663386566303966 Dec 13 23:07:28.425000 audit: BPF prog-id=171 op=UNLOAD Dec 13 23:07:28.425000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:28.425000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336663933383362356463336230613131613733353663386566303966 Dec 13 23:07:28.425000 audit: BPF prog-id=170 op=UNLOAD Dec 13 23:07:28.425000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:28.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336663933383362356463336230613131613733353663386566303966 Dec 13 23:07:28.425000 audit: BPF prog-id=172 op=LOAD Dec 13 23:07:28.425000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3340 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:28.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336663933383362356463336230613131613733353663386566303966 Dec 13 23:07:28.445866 containerd[1603]: time="2025-12-13T23:07:28.445790739Z" level=info msg="StartContainer for \"c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f\" returns successfully" Dec 13 23:07:28.458730 systemd[1]: cri-containerd-c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f.scope: Deactivated successfully. 
Dec 13 23:07:28.459038 systemd[1]: cri-containerd-c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f.scope: Consumed 31ms CPU time, 6.5M memory peak, 4.5M written to disk. Dec 13 23:07:28.463150 containerd[1603]: time="2025-12-13T23:07:28.463077385Z" level=info msg="received container exit event container_id:\"c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f\" id:\"c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f\" pid:3508 exited_at:{seconds:1765667248 nanos:462600424}" Dec 13 23:07:28.465000 audit: BPF prog-id=172 op=UNLOAD Dec 13 23:07:28.488542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6f9383b5dc3b0a11a7356c8ef09f0e53868c27698de80a70b1537af5d6b426f-rootfs.mount: Deactivated successfully. Dec 13 23:07:28.805481 kubelet[2771]: I1213 23:07:28.805448 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 23:07:28.805882 kubelet[2771]: E1213 23:07:28.805734 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:28.806623 kubelet[2771]: E1213 23:07:28.806602 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:28.807823 containerd[1603]: time="2025-12-13T23:07:28.807644338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 13 23:07:29.732617 kubelet[2771]: E1213 23:07:29.732557 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:30.962233 containerd[1603]: time="2025-12-13T23:07:30.962187003Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:30.963121 containerd[1603]: time="2025-12-13T23:07:30.962673676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 13 23:07:30.963545 containerd[1603]: time="2025-12-13T23:07:30.963514072Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:30.965505 containerd[1603]: time="2025-12-13T23:07:30.965478569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:30.966002 containerd[1603]: time="2025-12-13T23:07:30.965982447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.158298219s" Dec 13 23:07:30.966093 containerd[1603]: time="2025-12-13T23:07:30.966077789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 13 23:07:30.971753 containerd[1603]: time="2025-12-13T23:07:30.971714982Z" level=info msg="CreateContainer within sandbox \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 23:07:30.983631 containerd[1603]: time="2025-12-13T23:07:30.983410945Z" level=info msg="Container 3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db: CDI devices from CRI 
Config.CDIDevices: []" Dec 13 23:07:30.983823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151134883.mount: Deactivated successfully. Dec 13 23:07:30.991575 containerd[1603]: time="2025-12-13T23:07:30.991528875Z" level=info msg="CreateContainer within sandbox \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db\"" Dec 13 23:07:30.992305 containerd[1603]: time="2025-12-13T23:07:30.992269208Z" level=info msg="StartContainer for \"3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db\"" Dec 13 23:07:30.994483 containerd[1603]: time="2025-12-13T23:07:30.994449075Z" level=info msg="connecting to shim 3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db" address="unix:///run/containerd/s/531ca2ca35da9d0809e0a5134bd13b0826a4f8ad31fcd74d2e17deb15e7b6610" protocol=ttrpc version=3 Dec 13 23:07:31.017348 systemd[1]: Started cri-containerd-3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db.scope - libcontainer container 3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db. 
Dec 13 23:07:31.064517 kernel: kauditd_printk_skb: 90 callbacks suppressed Dec 13 23:07:31.064622 kernel: audit: type=1334 audit(1765667251.061:569): prog-id=173 op=LOAD Dec 13 23:07:31.064669 kernel: audit: type=1300 audit(1765667251.061:569): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.061000 audit: BPF prog-id=173 op=LOAD Dec 13 23:07:31.061000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.071146 kernel: audit: type=1327 audit(1765667251.061:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.062000 audit: BPF prog-id=174 op=LOAD Dec 13 23:07:31.072133 kernel: audit: type=1334 audit(1765667251.062:570): prog-id=174 op=LOAD Dec 13 23:07:31.072162 kernel: audit: type=1300 audit(1765667251.062:570): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.062000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.078770 kernel: audit: type=1327 audit(1765667251.062:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.063000 audit: BPF prog-id=174 op=UNLOAD Dec 13 23:07:31.079721 kernel: audit: type=1334 audit(1765667251.063:571): prog-id=174 op=UNLOAD Dec 13 23:07:31.063000 audit[3557]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.083361 kernel: audit: type=1300 audit(1765667251.063:571): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.083448 kernel: audit: type=1327 audit(1765667251.063:571): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.063000 audit: BPF prog-id=173 op=UNLOAD Dec 13 23:07:31.087137 kernel: audit: type=1334 audit(1765667251.063:572): prog-id=173 op=UNLOAD Dec 13 23:07:31.063000 audit[3557]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.063000 audit: BPF prog-id=175 op=LOAD Dec 13 23:07:31.063000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3340 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:31.063000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333393062373833303738323033316266346135656239333139396166 Dec 13 23:07:31.094973 containerd[1603]: time="2025-12-13T23:07:31.094871681Z" level=info msg="StartContainer for \"3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db\" returns successfully" Dec 13 23:07:31.732744 kubelet[2771]: E1213 23:07:31.732702 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:31.826746 kubelet[2771]: E1213 23:07:31.826699 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:31.839902 containerd[1603]: time="2025-12-13T23:07:31.839848140Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 23:07:31.844735 systemd[1]: cri-containerd-3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db.scope: Deactivated successfully. Dec 13 23:07:31.845154 systemd[1]: cri-containerd-3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db.scope: Consumed 462ms CPU time, 177.5M memory peak, 2.7M read from disk, 165.9M written to disk. 
Dec 13 23:07:31.848766 containerd[1603]: time="2025-12-13T23:07:31.848732600Z" level=info msg="received container exit event container_id:\"3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db\" id:\"3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db\" pid:3570 exited_at:{seconds:1765667251 nanos:848455779}" Dec 13 23:07:31.849000 audit: BPF prog-id=175 op=UNLOAD Dec 13 23:07:31.867329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3390b7830782031bf4a5eb93199af966ce2329de001cd5a177aba825c4d9a1db-rootfs.mount: Deactivated successfully. Dec 13 23:07:31.932472 kubelet[2771]: I1213 23:07:31.932441 2771 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 13 23:07:31.975381 systemd[1]: Created slice kubepods-besteffort-pod485d924b_b146_4fe9_a032_61945d861754.slice - libcontainer container kubepods-besteffort-pod485d924b_b146_4fe9_a032_61945d861754.slice. Dec 13 23:07:32.008098 systemd[1]: Created slice kubepods-burstable-podd2d46229_e483_4a67_a12f_24b1342fa667.slice - libcontainer container kubepods-burstable-podd2d46229_e483_4a67_a12f_24b1342fa667.slice. 
Dec 13 23:07:32.016686 kubelet[2771]: I1213 23:07:32.016651 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5j8\" (UniqueName: \"kubernetes.io/projected/485d924b-b146-4fe9-a032-61945d861754-kube-api-access-st5j8\") pod \"calico-apiserver-567b4f6b5-7hz8c\" (UID: \"485d924b-b146-4fe9-a032-61945d861754\") " pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" Dec 13 23:07:32.018235 kubelet[2771]: I1213 23:07:32.018210 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/485d924b-b146-4fe9-a032-61945d861754-calico-apiserver-certs\") pod \"calico-apiserver-567b4f6b5-7hz8c\" (UID: \"485d924b-b146-4fe9-a032-61945d861754\") " pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" Dec 13 23:07:32.020129 systemd[1]: Created slice kubepods-burstable-podc0c4f348_68ea_46b0_9f9e_e194751ec60f.slice - libcontainer container kubepods-burstable-podc0c4f348_68ea_46b0_9f9e_e194751ec60f.slice. Dec 13 23:07:32.030410 systemd[1]: Created slice kubepods-besteffort-pode9f6be97_9073_49f4_b46f_97add2dc7d48.slice - libcontainer container kubepods-besteffort-pode9f6be97_9073_49f4_b46f_97add2dc7d48.slice. Dec 13 23:07:32.050591 systemd[1]: Created slice kubepods-besteffort-pod2b638ae5_abb0_4b0a_be9b_2c5db35a39f8.slice - libcontainer container kubepods-besteffort-pod2b638ae5_abb0_4b0a_be9b_2c5db35a39f8.slice. Dec 13 23:07:32.066600 systemd[1]: Created slice kubepods-besteffort-pod749ee60f_8b5b_4a24_9e66_ab82e119fd2c.slice - libcontainer container kubepods-besteffort-pod749ee60f_8b5b_4a24_9e66_ab82e119fd2c.slice. Dec 13 23:07:32.074300 systemd[1]: Created slice kubepods-besteffort-poda7cced14_c53b_44b0_9445_694ed7cd5577.slice - libcontainer container kubepods-besteffort-poda7cced14_c53b_44b0_9445_694ed7cd5577.slice. 
Dec 13 23:07:32.119964 kubelet[2771]: I1213 23:07:32.119403 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6pl\" (UniqueName: \"kubernetes.io/projected/c0c4f348-68ea-46b0-9f9e-e194751ec60f-kube-api-access-bz6pl\") pod \"coredns-674b8bbfcf-7hjk8\" (UID: \"c0c4f348-68ea-46b0-9f9e-e194751ec60f\") " pod="kube-system/coredns-674b8bbfcf-7hjk8" Dec 13 23:07:32.119964 kubelet[2771]: I1213 23:07:32.119452 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-ca-bundle\") pod \"whisker-7897fb9d66-z8zxc\" (UID: \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\") " pod="calico-system/whisker-7897fb9d66-z8zxc" Dec 13 23:07:32.119964 kubelet[2771]: I1213 23:07:32.119492 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749ee60f-8b5b-4a24-9e66-ab82e119fd2c-goldmane-ca-bundle\") pod \"goldmane-666569f655-fcgnt\" (UID: \"749ee60f-8b5b-4a24-9e66-ab82e119fd2c\") " pod="calico-system/goldmane-666569f655-fcgnt" Dec 13 23:07:32.119964 kubelet[2771]: I1213 23:07:32.119511 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e9f6be97-9073-49f4-b46f-97add2dc7d48-calico-apiserver-certs\") pod \"calico-apiserver-567b4f6b5-cd8w4\" (UID: \"e9f6be97-9073-49f4-b46f-97add2dc7d48\") " pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" Dec 13 23:07:32.119964 kubelet[2771]: I1213 23:07:32.119527 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdh5d\" (UniqueName: \"kubernetes.io/projected/e9f6be97-9073-49f4-b46f-97add2dc7d48-kube-api-access-bdh5d\") pod \"calico-apiserver-567b4f6b5-cd8w4\" (UID: 
\"e9f6be97-9073-49f4-b46f-97add2dc7d48\") " pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" Dec 13 23:07:32.120222 kubelet[2771]: I1213 23:07:32.119544 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2d46229-e483-4a67-a12f-24b1342fa667-config-volume\") pod \"coredns-674b8bbfcf-pvkzj\" (UID: \"d2d46229-e483-4a67-a12f-24b1342fa667\") " pod="kube-system/coredns-674b8bbfcf-pvkzj" Dec 13 23:07:32.120222 kubelet[2771]: I1213 23:07:32.119564 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/749ee60f-8b5b-4a24-9e66-ab82e119fd2c-config\") pod \"goldmane-666569f655-fcgnt\" (UID: \"749ee60f-8b5b-4a24-9e66-ab82e119fd2c\") " pod="calico-system/goldmane-666569f655-fcgnt" Dec 13 23:07:32.120222 kubelet[2771]: I1213 23:07:32.119579 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zbm\" (UniqueName: \"kubernetes.io/projected/d2d46229-e483-4a67-a12f-24b1342fa667-kube-api-access-b7zbm\") pod \"coredns-674b8bbfcf-pvkzj\" (UID: \"d2d46229-e483-4a67-a12f-24b1342fa667\") " pod="kube-system/coredns-674b8bbfcf-pvkzj" Dec 13 23:07:32.120222 kubelet[2771]: I1213 23:07:32.119596 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/749ee60f-8b5b-4a24-9e66-ab82e119fd2c-goldmane-key-pair\") pod \"goldmane-666569f655-fcgnt\" (UID: \"749ee60f-8b5b-4a24-9e66-ab82e119fd2c\") " pod="calico-system/goldmane-666569f655-fcgnt" Dec 13 23:07:32.120222 kubelet[2771]: I1213 23:07:32.119611 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0c4f348-68ea-46b0-9f9e-e194751ec60f-config-volume\") pod 
\"coredns-674b8bbfcf-7hjk8\" (UID: \"c0c4f348-68ea-46b0-9f9e-e194751ec60f\") " pod="kube-system/coredns-674b8bbfcf-7hjk8" Dec 13 23:07:32.120328 kubelet[2771]: I1213 23:07:32.119628 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r242p\" (UniqueName: \"kubernetes.io/projected/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-kube-api-access-r242p\") pod \"whisker-7897fb9d66-z8zxc\" (UID: \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\") " pod="calico-system/whisker-7897fb9d66-z8zxc" Dec 13 23:07:32.120328 kubelet[2771]: I1213 23:07:32.119644 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm69d\" (UniqueName: \"kubernetes.io/projected/749ee60f-8b5b-4a24-9e66-ab82e119fd2c-kube-api-access-tm69d\") pod \"goldmane-666569f655-fcgnt\" (UID: \"749ee60f-8b5b-4a24-9e66-ab82e119fd2c\") " pod="calico-system/goldmane-666569f655-fcgnt" Dec 13 23:07:32.120328 kubelet[2771]: I1213 23:07:32.119660 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vrl\" (UniqueName: \"kubernetes.io/projected/a7cced14-c53b-44b0-9445-694ed7cd5577-kube-api-access-m8vrl\") pod \"calico-kube-controllers-69fcdcb775-d2n7t\" (UID: \"a7cced14-c53b-44b0-9445-694ed7cd5577\") " pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" Dec 13 23:07:32.120328 kubelet[2771]: I1213 23:07:32.119684 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-backend-key-pair\") pod \"whisker-7897fb9d66-z8zxc\" (UID: \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\") " pod="calico-system/whisker-7897fb9d66-z8zxc" Dec 13 23:07:32.120328 kubelet[2771]: I1213 23:07:32.119700 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a7cced14-c53b-44b0-9445-694ed7cd5577-tigera-ca-bundle\") pod \"calico-kube-controllers-69fcdcb775-d2n7t\" (UID: \"a7cced14-c53b-44b0-9445-694ed7cd5577\") " pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" Dec 13 23:07:32.283490 containerd[1603]: time="2025-12-13T23:07:32.283371373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-7hz8c,Uid:485d924b-b146-4fe9-a032-61945d861754,Namespace:calico-apiserver,Attempt:0,}" Dec 13 23:07:32.314949 kubelet[2771]: E1213 23:07:32.314840 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:32.316418 containerd[1603]: time="2025-12-13T23:07:32.316383620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvkzj,Uid:d2d46229-e483-4a67-a12f-24b1342fa667,Namespace:kube-system,Attempt:0,}" Dec 13 23:07:32.325960 kubelet[2771]: E1213 23:07:32.325511 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:32.328126 containerd[1603]: time="2025-12-13T23:07:32.326678418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hjk8,Uid:c0c4f348-68ea-46b0-9f9e-e194751ec60f,Namespace:kube-system,Attempt:0,}" Dec 13 23:07:32.335115 containerd[1603]: time="2025-12-13T23:07:32.335034561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-cd8w4,Uid:e9f6be97-9073-49f4-b46f-97add2dc7d48,Namespace:calico-apiserver,Attempt:0,}" Dec 13 23:07:32.363376 containerd[1603]: time="2025-12-13T23:07:32.363333803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7897fb9d66-z8zxc,Uid:2b638ae5-abb0-4b0a-be9b-2c5db35a39f8,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:32.378426 containerd[1603]: 
time="2025-12-13T23:07:32.378375333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fcdcb775-d2n7t,Uid:a7cced14-c53b-44b0-9445-694ed7cd5577,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:32.381341 containerd[1603]: time="2025-12-13T23:07:32.381300878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fcgnt,Uid:749ee60f-8b5b-4a24-9e66-ab82e119fd2c,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:32.392373 containerd[1603]: time="2025-12-13T23:07:32.392315269Z" level=error msg="Failed to destroy network for sandbox \"e93c9869927634e3e2a5f947168bd9e4124cecc74338663cc6498344861ed45d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.395318 containerd[1603]: time="2025-12-13T23:07:32.395268060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-7hz8c,Uid:485d924b-b146-4fe9-a032-61945d861754,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93c9869927634e3e2a5f947168bd9e4124cecc74338663cc6498344861ed45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.401367 kubelet[2771]: E1213 23:07:32.401303 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93c9869927634e3e2a5f947168bd9e4124cecc74338663cc6498344861ed45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.401486 kubelet[2771]: E1213 23:07:32.401402 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93c9869927634e3e2a5f947168bd9e4124cecc74338663cc6498344861ed45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" Dec 13 23:07:32.401486 kubelet[2771]: E1213 23:07:32.401425 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93c9869927634e3e2a5f947168bd9e4124cecc74338663cc6498344861ed45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" Dec 13 23:07:32.401536 kubelet[2771]: E1213 23:07:32.401480 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567b4f6b5-7hz8c_calico-apiserver(485d924b-b146-4fe9-a032-61945d861754)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567b4f6b5-7hz8c_calico-apiserver(485d924b-b146-4fe9-a032-61945d861754)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e93c9869927634e3e2a5f947168bd9e4124cecc74338663cc6498344861ed45d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:07:32.415640 containerd[1603]: time="2025-12-13T23:07:32.415526264Z" level=error msg="Failed to destroy network for sandbox \"b3fc5f04c9409f10a41269fed9fbd128e88d51e648874b6b7837c6b8915aef8f\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.417283 containerd[1603]: time="2025-12-13T23:07:32.417233949Z" level=error msg="Failed to destroy network for sandbox \"aaaca09e57190529c002a2d207d20ef7aa8e2e67d00c5cb05731f6fcfa56d9ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.418644 containerd[1603]: time="2025-12-13T23:07:32.418607002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-cd8w4,Uid:e9f6be97-9073-49f4-b46f-97add2dc7d48,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fc5f04c9409f10a41269fed9fbd128e88d51e648874b6b7837c6b8915aef8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.419278 kubelet[2771]: E1213 23:07:32.419065 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fc5f04c9409f10a41269fed9fbd128e88d51e648874b6b7837c6b8915aef8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.419278 kubelet[2771]: E1213 23:07:32.419142 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fc5f04c9409f10a41269fed9fbd128e88d51e648874b6b7837c6b8915aef8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" Dec 13 23:07:32.419278 kubelet[2771]: E1213 23:07:32.419168 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fc5f04c9409f10a41269fed9fbd128e88d51e648874b6b7837c6b8915aef8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" Dec 13 23:07:32.419422 kubelet[2771]: E1213 23:07:32.419238 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567b4f6b5-cd8w4_calico-apiserver(e9f6be97-9073-49f4-b46f-97add2dc7d48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567b4f6b5-cd8w4_calico-apiserver(e9f6be97-9073-49f4-b46f-97add2dc7d48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3fc5f04c9409f10a41269fed9fbd128e88d51e648874b6b7837c6b8915aef8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48" Dec 13 23:07:32.420549 containerd[1603]: time="2025-12-13T23:07:32.420510328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvkzj,Uid:d2d46229-e483-4a67-a12f-24b1342fa667,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaaca09e57190529c002a2d207d20ef7aa8e2e67d00c5cb05731f6fcfa56d9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
23:07:32.420980 containerd[1603]: time="2025-12-13T23:07:32.420947982Z" level=error msg="Failed to destroy network for sandbox \"02365669fc5996cd1d86ccb490d72aed4e1377c206db9184b9b7f6f7a91dd039\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.421172 kubelet[2771]: E1213 23:07:32.421130 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaaca09e57190529c002a2d207d20ef7aa8e2e67d00c5cb05731f6fcfa56d9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.421232 kubelet[2771]: E1213 23:07:32.421218 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaaca09e57190529c002a2d207d20ef7aa8e2e67d00c5cb05731f6fcfa56d9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvkzj" Dec 13 23:07:32.421257 kubelet[2771]: E1213 23:07:32.421237 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaaca09e57190529c002a2d207d20ef7aa8e2e67d00c5cb05731f6fcfa56d9ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvkzj" Dec 13 23:07:32.421318 kubelet[2771]: E1213 23:07:32.421291 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-pvkzj_kube-system(d2d46229-e483-4a67-a12f-24b1342fa667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pvkzj_kube-system(d2d46229-e483-4a67-a12f-24b1342fa667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaaca09e57190529c002a2d207d20ef7aa8e2e67d00c5cb05731f6fcfa56d9ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pvkzj" podUID="d2d46229-e483-4a67-a12f-24b1342fa667" Dec 13 23:07:32.424551 containerd[1603]: time="2025-12-13T23:07:32.424498660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hjk8,Uid:c0c4f348-68ea-46b0-9f9e-e194751ec60f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02365669fc5996cd1d86ccb490d72aed4e1377c206db9184b9b7f6f7a91dd039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.426215 kubelet[2771]: E1213 23:07:32.424721 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02365669fc5996cd1d86ccb490d72aed4e1377c206db9184b9b7f6f7a91dd039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.426215 kubelet[2771]: E1213 23:07:32.424792 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02365669fc5996cd1d86ccb490d72aed4e1377c206db9184b9b7f6f7a91dd039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7hjk8" Dec 13 23:07:32.426215 kubelet[2771]: E1213 23:07:32.424812 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02365669fc5996cd1d86ccb490d72aed4e1377c206db9184b9b7f6f7a91dd039\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7hjk8" Dec 13 23:07:32.426324 kubelet[2771]: E1213 23:07:32.424872 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7hjk8_kube-system(c0c4f348-68ea-46b0-9f9e-e194751ec60f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7hjk8_kube-system(c0c4f348-68ea-46b0-9f9e-e194751ec60f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02365669fc5996cd1d86ccb490d72aed4e1377c206db9184b9b7f6f7a91dd039\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7hjk8" podUID="c0c4f348-68ea-46b0-9f9e-e194751ec60f" Dec 13 23:07:32.453880 containerd[1603]: time="2025-12-13T23:07:32.453835242Z" level=error msg="Failed to destroy network for sandbox \"0e41956c36b183966d5fda027acbdad22b7b61afa1a57dd43edc738aba0d8dce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.455905 containerd[1603]: time="2025-12-13T23:07:32.455700320Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7897fb9d66-z8zxc,Uid:2b638ae5-abb0-4b0a-be9b-2c5db35a39f8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e41956c36b183966d5fda027acbdad22b7b61afa1a57dd43edc738aba0d8dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.456541 kubelet[2771]: E1213 23:07:32.456503 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e41956c36b183966d5fda027acbdad22b7b61afa1a57dd43edc738aba0d8dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.456623 kubelet[2771]: E1213 23:07:32.456563 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e41956c36b183966d5fda027acbdad22b7b61afa1a57dd43edc738aba0d8dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7897fb9d66-z8zxc" Dec 13 23:07:32.456623 kubelet[2771]: E1213 23:07:32.456589 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e41956c36b183966d5fda027acbdad22b7b61afa1a57dd43edc738aba0d8dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7897fb9d66-z8zxc" Dec 13 23:07:32.456769 kubelet[2771]: E1213 23:07:32.456644 2771 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7897fb9d66-z8zxc_calico-system(2b638ae5-abb0-4b0a-be9b-2c5db35a39f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7897fb9d66-z8zxc_calico-system(2b638ae5-abb0-4b0a-be9b-2c5db35a39f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e41956c36b183966d5fda027acbdad22b7b61afa1a57dd43edc738aba0d8dce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7897fb9d66-z8zxc" podUID="2b638ae5-abb0-4b0a-be9b-2c5db35a39f8" Dec 13 23:07:32.461732 containerd[1603]: time="2025-12-13T23:07:32.461666274Z" level=error msg="Failed to destroy network for sandbox \"342d7801f399b7dfbfa946ba3c5c6cf88917a2855cc79bbf50988468c986f618\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.463644 containerd[1603]: time="2025-12-13T23:07:32.463601247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fcdcb775-d2n7t,Uid:a7cced14-c53b-44b0-9445-694ed7cd5577,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"342d7801f399b7dfbfa946ba3c5c6cf88917a2855cc79bbf50988468c986f618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.464006 kubelet[2771]: E1213 23:07:32.463969 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342d7801f399b7dfbfa946ba3c5c6cf88917a2855cc79bbf50988468c986f618\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.464073 kubelet[2771]: E1213 23:07:32.464026 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342d7801f399b7dfbfa946ba3c5c6cf88917a2855cc79bbf50988468c986f618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" Dec 13 23:07:32.464073 kubelet[2771]: E1213 23:07:32.464052 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342d7801f399b7dfbfa946ba3c5c6cf88917a2855cc79bbf50988468c986f618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" Dec 13 23:07:32.464191 kubelet[2771]: E1213 23:07:32.464094 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69fcdcb775-d2n7t_calico-system(a7cced14-c53b-44b0-9445-694ed7cd5577)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69fcdcb775-d2n7t_calico-system(a7cced14-c53b-44b0-9445-694ed7cd5577)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"342d7801f399b7dfbfa946ba3c5c6cf88917a2855cc79bbf50988468c986f618\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" 
podUID="a7cced14-c53b-44b0-9445-694ed7cd5577" Dec 13 23:07:32.464631 containerd[1603]: time="2025-12-13T23:07:32.464589578Z" level=error msg="Failed to destroy network for sandbox \"0026f58409479e21bf2c8411733bf0868eae8eb14e3e946af999dd382cb66a07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.467968 containerd[1603]: time="2025-12-13T23:07:32.467899924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fcgnt,Uid:749ee60f-8b5b-4a24-9e66-ab82e119fd2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0026f58409479e21bf2c8411733bf0868eae8eb14e3e946af999dd382cb66a07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.468171 kubelet[2771]: E1213 23:07:32.468142 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0026f58409479e21bf2c8411733bf0868eae8eb14e3e946af999dd382cb66a07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:32.468221 kubelet[2771]: E1213 23:07:32.468204 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0026f58409479e21bf2c8411733bf0868eae8eb14e3e946af999dd382cb66a07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fcgnt" Dec 13 23:07:32.468254 kubelet[2771]: E1213 
23:07:32.468226 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0026f58409479e21bf2c8411733bf0868eae8eb14e3e946af999dd382cb66a07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fcgnt" Dec 13 23:07:32.468300 kubelet[2771]: E1213 23:07:32.468277 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fcgnt_calico-system(749ee60f-8b5b-4a24-9e66-ab82e119fd2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fcgnt_calico-system(749ee60f-8b5b-4a24-9e66-ab82e119fd2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0026f58409479e21bf2c8411733bf0868eae8eb14e3e946af999dd382cb66a07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fcgnt" podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c" Dec 13 23:07:32.832061 kubelet[2771]: E1213 23:07:32.831714 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:32.832784 containerd[1603]: time="2025-12-13T23:07:32.832522281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 13 23:07:33.738046 systemd[1]: Created slice kubepods-besteffort-pod68e4691c_7de1_4668_91bc_eef5c31432bb.slice - libcontainer container kubepods-besteffort-pod68e4691c_7de1_4668_91bc_eef5c31432bb.slice. 
Dec 13 23:07:33.740493 containerd[1603]: time="2025-12-13T23:07:33.740462794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xf56,Uid:68e4691c-7de1-4668-91bc-eef5c31432bb,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:33.782849 containerd[1603]: time="2025-12-13T23:07:33.782802300Z" level=error msg="Failed to destroy network for sandbox \"f14ff3cd19eace2f81634f9868430ca9a0f713aaa014d47fbbad62f349b6f8ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:33.784711 systemd[1]: run-netns-cni\x2dacc87ba3\x2d60c1\x2d3a8e\x2d1bb1\x2d6ffc333390df.mount: Deactivated successfully. Dec 13 23:07:33.789549 containerd[1603]: time="2025-12-13T23:07:33.789459703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xf56,Uid:68e4691c-7de1-4668-91bc-eef5c31432bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14ff3cd19eace2f81634f9868430ca9a0f713aaa014d47fbbad62f349b6f8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:33.789903 kubelet[2771]: E1213 23:07:33.789829 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14ff3cd19eace2f81634f9868430ca9a0f713aaa014d47fbbad62f349b6f8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 23:07:33.790117 kubelet[2771]: E1213 23:07:33.790049 2771 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f14ff3cd19eace2f81634f9868430ca9a0f713aaa014d47fbbad62f349b6f8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:33.790275 kubelet[2771]: E1213 23:07:33.790239 2771 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14ff3cd19eace2f81634f9868430ca9a0f713aaa014d47fbbad62f349b6f8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8xf56" Dec 13 23:07:33.790343 kubelet[2771]: E1213 23:07:33.790320 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f14ff3cd19eace2f81634f9868430ca9a0f713aaa014d47fbbad62f349b6f8ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:35.140772 kubelet[2771]: I1213 23:07:35.140735 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 23:07:35.141409 kubelet[2771]: E1213 23:07:35.141205 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:35.169000 audit[3883]: NETFILTER_CFG 
table=filter:119 family=2 entries=21 op=nft_register_rule pid=3883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:35.169000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdea2ff20 a2=0 a3=1 items=0 ppid=2912 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:35.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:35.176000 audit[3883]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:35.176000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdea2ff20 a2=0 a3=1 items=0 ppid=2912 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:35.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:35.844044 kubelet[2771]: E1213 23:07:35.843996 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:36.733059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406162711.mount: Deactivated successfully. 
Dec 13 23:07:36.976784 containerd[1603]: time="2025-12-13T23:07:36.976715563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:36.977824 containerd[1603]: time="2025-12-13T23:07:36.977768674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 13 23:07:36.979136 containerd[1603]: time="2025-12-13T23:07:36.979062189Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:36.981421 containerd[1603]: time="2025-12-13T23:07:36.981370047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 23:07:36.982043 containerd[1603]: time="2025-12-13T23:07:36.981816528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.149245357s" Dec 13 23:07:36.982043 containerd[1603]: time="2025-12-13T23:07:36.981851535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 13 23:07:36.997981 containerd[1603]: time="2025-12-13T23:07:36.997868841Z" level=info msg="CreateContainer within sandbox \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 23:07:37.006251 containerd[1603]: time="2025-12-13T23:07:37.006217161Z" level=info msg="Container 
d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:37.016118 containerd[1603]: time="2025-12-13T23:07:37.016075083Z" level=info msg="CreateContainer within sandbox \"f308f282eae693b6ed68f6eaee8b32c806e16d97b9f3a0856871e28f0a8e8e15\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9\"" Dec 13 23:07:37.016968 containerd[1603]: time="2025-12-13T23:07:37.016931713Z" level=info msg="StartContainer for \"d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9\"" Dec 13 23:07:37.018417 containerd[1603]: time="2025-12-13T23:07:37.018386247Z" level=info msg="connecting to shim d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9" address="unix:///run/containerd/s/531ca2ca35da9d0809e0a5134bd13b0826a4f8ad31fcd74d2e17deb15e7b6610" protocol=ttrpc version=3 Dec 13 23:07:37.044299 systemd[1]: Started cri-containerd-d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9.scope - libcontainer container d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9. 
Dec 13 23:07:37.120287 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 13 23:07:37.120386 kernel: audit: type=1334 audit(1765667257.118:577): prog-id=176 op=LOAD Dec 13 23:07:37.118000 audit: BPF prog-id=176 op=LOAD Dec 13 23:07:37.118000 audit[3886]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.124172 kernel: audit: type=1300 audit(1765667257.118:577): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.127356 kernel: audit: type=1327 audit(1765667257.118:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.118000 audit: BPF prog-id=177 op=LOAD Dec 13 23:07:37.128906 kernel: audit: type=1334 audit(1765667257.118:578): prog-id=177 op=LOAD Dec 13 23:07:37.128971 kernel: audit: type=1300 audit(1765667257.118:578): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.118000 audit[3886]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.148628 kernel: audit: type=1327 audit(1765667257.118:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.148731 kernel: audit: type=1334 audit(1765667257.119:579): prog-id=177 op=UNLOAD Dec 13 23:07:37.119000 audit: BPF prog-id=177 op=UNLOAD Dec 13 23:07:37.119000 audit[3886]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.153963 kernel: audit: type=1300 audit(1765667257.119:579): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.154140 kernel: audit: type=1327 audit(1765667257.119:579): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.119000 audit: BPF prog-id=176 op=UNLOAD Dec 13 23:07:37.119000 audit[3886]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.158235 kernel: audit: type=1334 audit(1765667257.119:580): prog-id=176 op=UNLOAD Dec 13 23:07:37.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.120000 audit: BPF prog-id=178 op=LOAD Dec 13 23:07:37.120000 audit[3886]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3340 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:37.120000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432666264313365376235303761323030613034313631393230393539 Dec 13 23:07:37.170966 containerd[1603]: time="2025-12-13T23:07:37.170814232Z" level=info msg="StartContainer for \"d2fbd13e7b507a200a041619209599f561ea5bee8da6e1a334a7835e60b32fb9\" returns successfully" Dec 13 23:07:37.292619 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 23:07:37.292744 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 23:07:37.459353 kubelet[2771]: I1213 23:07:37.459295 2771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-ca-bundle\") pod \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\" (UID: \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\") " Dec 13 23:07:37.459353 kubelet[2771]: I1213 23:07:37.459353 2771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-backend-key-pair\") pod \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\" (UID: \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\") " Dec 13 23:07:37.459758 kubelet[2771]: I1213 23:07:37.459387 2771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r242p\" (UniqueName: \"kubernetes.io/projected/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-kube-api-access-r242p\") pod \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\" (UID: \"2b638ae5-abb0-4b0a-be9b-2c5db35a39f8\") " Dec 13 23:07:37.469024 kubelet[2771]: I1213 23:07:37.468984 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "2b638ae5-abb0-4b0a-be9b-2c5db35a39f8" (UID: "2b638ae5-abb0-4b0a-be9b-2c5db35a39f8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 13 23:07:37.469152 kubelet[2771]: I1213 23:07:37.468983 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2b638ae5-abb0-4b0a-be9b-2c5db35a39f8" (UID: "2b638ae5-abb0-4b0a-be9b-2c5db35a39f8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 13 23:07:37.469152 kubelet[2771]: I1213 23:07:37.469000 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-kube-api-access-r242p" (OuterVolumeSpecName: "kube-api-access-r242p") pod "2b638ae5-abb0-4b0a-be9b-2c5db35a39f8" (UID: "2b638ae5-abb0-4b0a-be9b-2c5db35a39f8"). InnerVolumeSpecName "kube-api-access-r242p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 13 23:07:37.560256 kubelet[2771]: I1213 23:07:37.559907 2771 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 23:07:37.560256 kubelet[2771]: I1213 23:07:37.559940 2771 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 13 23:07:37.560256 kubelet[2771]: I1213 23:07:37.559950 2771 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r242p\" (UniqueName: \"kubernetes.io/projected/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8-kube-api-access-r242p\") on node \"localhost\" DevicePath \"\"" Dec 13 23:07:37.733976 systemd[1]: var-lib-kubelet-pods-2b638ae5\x2dabb0\x2d4b0a\x2dbe9b\x2d2c5db35a39f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr242p.mount: Deactivated successfully. Dec 13 23:07:37.734068 systemd[1]: var-lib-kubelet-pods-2b638ae5\x2dabb0\x2d4b0a\x2dbe9b\x2d2c5db35a39f8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 13 23:07:37.855554 kubelet[2771]: E1213 23:07:37.855016 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:37.864530 systemd[1]: Removed slice kubepods-besteffort-pod2b638ae5_abb0_4b0a_be9b_2c5db35a39f8.slice - libcontainer container kubepods-besteffort-pod2b638ae5_abb0_4b0a_be9b_2c5db35a39f8.slice. 
Dec 13 23:07:37.876688 kubelet[2771]: I1213 23:07:37.876621 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fhcw6" podStartSLOduration=1.655417082 podStartE2EDuration="12.876602511s" podCreationTimestamp="2025-12-13 23:07:25 +0000 UTC" firstStartedPulling="2025-12-13 23:07:25.761323025 +0000 UTC m=+21.144531267" lastFinishedPulling="2025-12-13 23:07:36.982508454 +0000 UTC m=+32.365716696" observedRunningTime="2025-12-13 23:07:37.875599816 +0000 UTC m=+33.258808058" watchObservedRunningTime="2025-12-13 23:07:37.876602511 +0000 UTC m=+33.259810753" Dec 13 23:07:37.939746 systemd[1]: Created slice kubepods-besteffort-pod22a73c93_9cd2_420c_82af_38c36ee2bfd3.slice - libcontainer container kubepods-besteffort-pod22a73c93_9cd2_420c_82af_38c36ee2bfd3.slice. Dec 13 23:07:37.961710 kubelet[2771]: I1213 23:07:37.961669 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzxj\" (UniqueName: \"kubernetes.io/projected/22a73c93-9cd2-420c-82af-38c36ee2bfd3-kube-api-access-qgzxj\") pod \"whisker-5c669cdcf6-md9gr\" (UID: \"22a73c93-9cd2-420c-82af-38c36ee2bfd3\") " pod="calico-system/whisker-5c669cdcf6-md9gr" Dec 13 23:07:37.961710 kubelet[2771]: I1213 23:07:37.961715 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22a73c93-9cd2-420c-82af-38c36ee2bfd3-whisker-backend-key-pair\") pod \"whisker-5c669cdcf6-md9gr\" (UID: \"22a73c93-9cd2-420c-82af-38c36ee2bfd3\") " pod="calico-system/whisker-5c669cdcf6-md9gr" Dec 13 23:07:37.961863 kubelet[2771]: I1213 23:07:37.961733 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22a73c93-9cd2-420c-82af-38c36ee2bfd3-whisker-ca-bundle\") pod \"whisker-5c669cdcf6-md9gr\" (UID: 
\"22a73c93-9cd2-420c-82af-38c36ee2bfd3\") " pod="calico-system/whisker-5c669cdcf6-md9gr" Dec 13 23:07:38.248376 containerd[1603]: time="2025-12-13T23:07:38.248268810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c669cdcf6-md9gr,Uid:22a73c93-9cd2-420c-82af-38c36ee2bfd3,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:38.427085 systemd-networkd[1296]: cali26301bcbc74: Link UP Dec 13 23:07:38.427568 systemd-networkd[1296]: cali26301bcbc74: Gained carrier Dec 13 23:07:38.442174 containerd[1603]: 2025-12-13 23:07:38.274 [INFO][3953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 23:07:38.442174 containerd[1603]: 2025-12-13 23:07:38.313 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5c669cdcf6--md9gr-eth0 whisker-5c669cdcf6- calico-system 22a73c93-9cd2-420c-82af-38c36ee2bfd3 953 0 2025-12-13 23:07:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c669cdcf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c669cdcf6-md9gr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26301bcbc74 [] [] }} ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-" Dec 13 23:07:38.442174 containerd[1603]: 2025-12-13 23:07:38.314 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.442174 containerd[1603]: 2025-12-13 23:07:38.380 [INFO][3966] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" HandleID="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Workload="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.380 [INFO][3966] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" HandleID="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Workload="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000285790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c669cdcf6-md9gr", "timestamp":"2025-12-13 23:07:38.380163768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.380 [INFO][3966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.380 [INFO][3966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.380 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.391 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" host="localhost" Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.397 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.401 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.404 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.406 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:38.442404 containerd[1603]: 2025-12-13 23:07:38.406 [INFO][3966] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" host="localhost" Dec 13 23:07:38.442598 containerd[1603]: 2025-12-13 23:07:38.408 [INFO][3966] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c Dec 13 23:07:38.442598 containerd[1603]: 2025-12-13 23:07:38.413 [INFO][3966] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" host="localhost" Dec 13 23:07:38.442598 containerd[1603]: 2025-12-13 23:07:38.418 [INFO][3966] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" host="localhost" Dec 13 23:07:38.442598 containerd[1603]: 2025-12-13 23:07:38.418 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" host="localhost" Dec 13 23:07:38.442598 containerd[1603]: 2025-12-13 23:07:38.418 [INFO][3966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:38.442598 containerd[1603]: 2025-12-13 23:07:38.418 [INFO][3966] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" HandleID="k8s-pod-network.6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Workload="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.442710 containerd[1603]: 2025-12-13 23:07:38.420 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c669cdcf6--md9gr-eth0", GenerateName:"whisker-5c669cdcf6-", Namespace:"calico-system", SelfLink:"", UID:"22a73c93-9cd2-420c-82af-38c36ee2bfd3", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c669cdcf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c669cdcf6-md9gr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26301bcbc74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:38.442710 containerd[1603]: 2025-12-13 23:07:38.421 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.442786 containerd[1603]: 2025-12-13 23:07:38.421 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26301bcbc74 ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.442786 containerd[1603]: 2025-12-13 23:07:38.427 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.442825 containerd[1603]: 2025-12-13 23:07:38.429 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" 
WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c669cdcf6--md9gr-eth0", GenerateName:"whisker-5c669cdcf6-", Namespace:"calico-system", SelfLink:"", UID:"22a73c93-9cd2-420c-82af-38c36ee2bfd3", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c669cdcf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c", Pod:"whisker-5c669cdcf6-md9gr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26301bcbc74", MAC:"3a:38:ea:a1:00:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:38.442870 containerd[1603]: 2025-12-13 23:07:38.438 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" Namespace="calico-system" Pod="whisker-5c669cdcf6-md9gr" WorkloadEndpoint="localhost-k8s-whisker--5c669cdcf6--md9gr-eth0" Dec 13 23:07:38.473242 containerd[1603]: time="2025-12-13T23:07:38.472713984Z" level=info msg="connecting to shim 
6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c" address="unix:///run/containerd/s/87efda9bcdf8dd0db352d813c91278c20f970e54c57b675faffbf08e774486f7" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:38.495304 systemd[1]: Started cri-containerd-6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c.scope - libcontainer container 6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c. Dec 13 23:07:38.506000 audit: BPF prog-id=179 op=LOAD Dec 13 23:07:38.506000 audit: BPF prog-id=180 op=LOAD Dec 13 23:07:38.506000 audit[4002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.506000 audit: BPF prog-id=180 op=UNLOAD Dec 13 23:07:38.506000 audit[4002]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.506000 audit: BPF prog-id=181 op=LOAD Dec 13 23:07:38.506000 audit[4002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001783e8 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.506000 audit: BPF prog-id=182 op=LOAD Dec 13 23:07:38.506000 audit[4002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.506000 audit: BPF prog-id=182 op=UNLOAD Dec 13 23:07:38.506000 audit[4002]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.507000 audit: BPF prog-id=181 op=UNLOAD Dec 13 23:07:38.507000 audit[4002]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.507000 audit: BPF prog-id=183 op=LOAD Dec 13 23:07:38.507000 audit[4002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396664333137636337363630376438333735363634346165633665 Dec 13 23:07:38.508757 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:38.530312 containerd[1603]: time="2025-12-13T23:07:38.530271271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c669cdcf6-md9gr,Uid:22a73c93-9cd2-420c-82af-38c36ee2bfd3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c9fd317cc76607d83756644aec6e85b8ef349ae9c34b9d12c2977e40c86228c\"" Dec 13 23:07:38.532209 containerd[1603]: time="2025-12-13T23:07:38.532143827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 23:07:38.746439 kubelet[2771]: I1213 23:07:38.746391 2771 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="2b638ae5-abb0-4b0a-be9b-2c5db35a39f8" path="/var/lib/kubelet/pods/2b638ae5-abb0-4b0a-be9b-2c5db35a39f8/volumes" Dec 13 23:07:38.747750 containerd[1603]: time="2025-12-13T23:07:38.747716028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:38.749760 containerd[1603]: time="2025-12-13T23:07:38.749714724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:38.752555 containerd[1603]: time="2025-12-13T23:07:38.752508674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 23:07:38.755175 kubelet[2771]: E1213 23:07:38.755060 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 23:07:38.755274 kubelet[2771]: E1213 23:07:38.755187 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 23:07:38.765852 kubelet[2771]: E1213 23:07:38.765730 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1b4a3e8666244488bf626e9588026acb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgzxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c669cdcf6-md9gr_calico-system(22a73c93-9cd2-420c-82af-38c36ee2bfd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:38.767586 containerd[1603]: time="2025-12-13T23:07:38.767553086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 23:07:38.801000 audit: BPF prog-id=184 op=LOAD Dec 13 
23:07:38.801000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3dd8c28 a2=98 a3=ffffd3dd8c18 items=0 ppid=4040 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.801000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 23:07:38.802000 audit: BPF prog-id=184 op=UNLOAD Dec 13 23:07:38.802000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd3dd8bf8 a3=0 items=0 ppid=4040 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.802000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 23:07:38.802000 audit: BPF prog-id=185 op=LOAD Dec 13 23:07:38.802000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3dd8ad8 a2=74 a3=95 items=0 ppid=4040 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.802000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 23:07:38.802000 audit: BPF prog-id=185 op=UNLOAD Dec 13 23:07:38.802000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4040 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.802000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 23:07:38.802000 audit: BPF prog-id=186 op=LOAD Dec 13 23:07:38.802000 audit[4154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3dd8b08 a2=40 a3=ffffd3dd8b38 items=0 ppid=4040 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.802000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 23:07:38.802000 audit: BPF prog-id=186 op=UNLOAD Dec 13 23:07:38.802000 audit[4154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd3dd8b38 items=0 ppid=4040 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.802000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 23:07:38.803000 audit: BPF prog-id=187 op=LOAD Dec 13 23:07:38.803000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe1f22f28 a2=98 a3=ffffe1f22f18 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.803000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.804000 audit: BPF prog-id=187 op=UNLOAD Dec 13 23:07:38.804000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe1f22ef8 a3=0 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.804000 audit: BPF prog-id=188 op=LOAD Dec 13 23:07:38.804000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1f22bb8 a2=74 a3=95 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.804000 audit: BPF prog-id=188 op=UNLOAD Dec 13 23:07:38.804000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4040 
pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.804000 audit: BPF prog-id=189 op=LOAD Dec 13 23:07:38.804000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1f22c18 a2=94 a3=2 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.804000 audit: BPF prog-id=189 op=UNLOAD Dec 13 23:07:38.804000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.859730 kubelet[2771]: E1213 23:07:38.859694 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:38.910000 audit: BPF prog-id=190 op=LOAD Dec 13 23:07:38.910000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1f22bd8 a2=40 a3=ffffe1f22c08 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.910000 
audit: BPF prog-id=190 op=UNLOAD Dec 13 23:07:38.910000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe1f22c08 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=191 op=LOAD Dec 13 23:07:38.920000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe1f22be8 a2=94 a3=4 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=191 op=UNLOAD Dec 13 23:07:38.920000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=192 op=LOAD Dec 13 23:07:38.920000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe1f22a28 a2=94 a3=5 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=192 op=UNLOAD Dec 13 23:07:38.920000 audit[4155]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=193 op=LOAD Dec 13 23:07:38.920000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe1f22c58 a2=94 a3=6 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=193 op=UNLOAD Dec 13 23:07:38.920000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.920000 audit: BPF prog-id=194 op=LOAD Dec 13 23:07:38.920000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe1f22428 a2=94 a3=83 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.921000 audit: BPF prog-id=195 op=LOAD Dec 13 23:07:38.921000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe1f221e8 a2=94 a3=2 
items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.921000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.921000 audit: BPF prog-id=195 op=UNLOAD Dec 13 23:07:38.921000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.921000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.921000 audit: BPF prog-id=194 op=UNLOAD Dec 13 23:07:38.921000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=24442620 a3=24435b00 items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.921000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 23:07:38.932000 audit: BPF prog-id=196 op=LOAD Dec 13 23:07:38.932000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefda2ce8 a2=98 a3=ffffefda2cd8 items=0 ppid=4040 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.932000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 23:07:38.932000 
audit: BPF prog-id=196 op=UNLOAD Dec 13 23:07:38.932000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffefda2cb8 a3=0 items=0 ppid=4040 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.932000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 23:07:38.932000 audit: BPF prog-id=197 op=LOAD Dec 13 23:07:38.932000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefda2b98 a2=74 a3=95 items=0 ppid=4040 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.932000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 23:07:38.932000 audit: BPF prog-id=197 op=UNLOAD Dec 13 23:07:38.932000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4040 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.932000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 23:07:38.932000 audit: BPF prog-id=198 op=LOAD Dec 13 23:07:38.932000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefda2bc8 a2=40 a3=ffffefda2bf8 items=0 ppid=4040 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.932000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 23:07:38.932000 audit: BPF prog-id=198 op=UNLOAD Dec 13 23:07:38.932000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffefda2bf8 items=0 ppid=4040 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:38.932000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 23:07:38.980023 containerd[1603]: time="2025-12-13T23:07:38.979980118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:38.980945 containerd[1603]: time="2025-12-13T23:07:38.980905794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 23:07:38.981031 containerd[1603]: time="2025-12-13T23:07:38.980981327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:38.981161 kubelet[2771]: E1213 23:07:38.981119 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 23:07:38.981207 kubelet[2771]: E1213 23:07:38.981173 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 23:07:38.981335 kubelet[2771]: E1213 23:07:38.981293 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgzxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c669cdcf6-md9gr_calico-system(22a73c93-9cd2-420c-82af-38c36ee2bfd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:38.983002 kubelet[2771]: E1213 23:07:38.982965 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c669cdcf6-md9gr" podUID="22a73c93-9cd2-420c-82af-38c36ee2bfd3" Dec 13 23:07:38.999203 systemd-networkd[1296]: vxlan.calico: Link UP Dec 13 23:07:38.999218 systemd-networkd[1296]: vxlan.calico: Gained carrier Dec 13 23:07:39.013000 audit: BPF prog-id=199 op=LOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8e316e8 a2=98 a3=fffff8e316d8 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=199 op=UNLOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff8e316b8 a3=0 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=200 op=LOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8e313c8 a2=74 a3=95 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=200 op=UNLOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=201 op=LOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8e31428 a2=94 a3=2 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=201 op=UNLOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=202 op=LOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8e312a8 a2=40 a3=fffff8e312d8 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=202 op=UNLOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff8e312d8 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=203 op=LOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8e313f8 a2=94 a3=b7 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.013000 audit: BPF prog-id=203 op=UNLOAD Dec 13 23:07:39.013000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.014000 audit: BPF prog-id=204 op=LOAD Dec 13 23:07:39.014000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8e30aa8 a2=94 a3=2 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.014000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.014000 audit: BPF prog-id=204 op=UNLOAD Dec 13 23:07:39.014000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.014000 audit: BPF prog-id=205 op=LOAD Dec 13 23:07:39.014000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8e30c38 a2=94 a3=30 items=0 ppid=4040 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 23:07:39.017000 audit: BPF prog-id=206 op=LOAD Dec 13 23:07:39.017000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2f66f48 a2=98 a3=ffffe2f66f38 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.017000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.017000 audit: BPF prog-id=206 op=UNLOAD Dec 13 23:07:39.017000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe2f66f18 a3=0 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.017000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.017000 audit: BPF prog-id=207 op=LOAD Dec 13 23:07:39.017000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe2f66bd8 a2=74 a3=95 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.017000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.017000 audit: BPF prog-id=207 op=UNLOAD Dec 13 23:07:39.017000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.017000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.017000 audit: BPF prog-id=208 op=LOAD Dec 13 23:07:39.017000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe2f66c38 a2=94 a3=2 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.017000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.017000 audit: BPF prog-id=208 op=UNLOAD Dec 13 23:07:39.017000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.017000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.114000 audit: BPF prog-id=209 op=LOAD Dec 13 23:07:39.114000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe2f66bf8 a2=40 a3=ffffe2f66c28 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.114000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.114000 audit: BPF prog-id=209 op=UNLOAD Dec 13 23:07:39.114000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe2f66c28 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=210 op=LOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe2f66c08 a2=94 a3=4 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=210 op=UNLOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=211 op=LOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2f66a48 a2=94 a3=5 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=211 op=UNLOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=212 op=LOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe2f66c78 a2=94 a3=6 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=212 op=UNLOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.124000 audit: BPF prog-id=213 op=LOAD Dec 13 23:07:39.124000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe2f66448 a2=94 a3=83 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.124000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.125000 audit: BPF prog-id=214 op=LOAD Dec 13 23:07:39.125000 audit[4213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe2f66208 a2=94 a3=2 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.125000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.125000 audit: BPF prog-id=214 op=UNLOAD Dec 13 23:07:39.125000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.125000 audit: BPF prog-id=213 op=UNLOAD Dec 13 23:07:39.125000 audit[4213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=18ce1620 a3=18cd4b00 items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 23:07:39.135000 audit: BPF prog-id=205 op=UNLOAD Dec 13 23:07:39.135000 audit[4040]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000142ec0 a2=0 a3=0 items=0 ppid=4033 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.135000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 13 23:07:39.181000 audit[4241]: NETFILTER_CFG table=nat:121 family=2 entries=15 
op=nft_register_chain pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:39.181000 audit[4241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffef765110 a2=0 a3=ffff84accfa8 items=0 ppid=4040 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.181000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:39.181000 audit[4243]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:39.181000 audit[4243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc3b0ae00 a2=0 a3=ffff9e130fa8 items=0 ppid=4040 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.181000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:39.187000 audit[4242]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:39.187000 audit[4242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd4fe09c0 a2=0 a3=ffff8ff56fa8 items=0 ppid=4040 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.187000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:39.190000 audit[4245]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4245 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:39.190000 audit[4245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff80fd3f0 a2=0 a3=ffff8dbbdfa8 items=0 ppid=4040 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.190000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:39.864034 kubelet[2771]: E1213 23:07:39.863990 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:39.865830 kubelet[2771]: E1213 23:07:39.865790 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5c669cdcf6-md9gr" podUID="22a73c93-9cd2-420c-82af-38c36ee2bfd3" Dec 13 23:07:39.891000 audit[4275]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:39.891000 audit[4275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff39b96e0 a2=0 a3=1 items=0 ppid=2912 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:39.895000 audit[4275]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:39.895000 audit[4275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff39b96e0 a2=0 a3=1 items=0 ppid=2912 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:39.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:40.015248 systemd-networkd[1296]: vxlan.calico: Gained IPv6LL Dec 13 23:07:40.015673 systemd-networkd[1296]: cali26301bcbc74: Gained IPv6LL Dec 13 23:07:42.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:58806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:42.656188 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:58806.service - OpenSSH per-connection server daemon (10.0.0.1:58806). Dec 13 23:07:42.659811 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 13 23:07:42.659960 kernel: audit: type=1130 audit(1765667262.655:658): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:58806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:42.732000 audit[4291]: USER_ACCT pid=4291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.733588 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 58806 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:07:42.736597 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:07:42.734000 audit[4291]: CRED_ACQ pid=4291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.739849 kernel: audit: type=1101 audit(1765667262.732:659): pid=4291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.739923 kernel: audit: type=1103 audit(1765667262.734:660): pid=4291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 13 23:07:42.742135 kernel: audit: type=1006 audit(1765667262.734:661): pid=4291 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 23:07:42.734000 audit[4291]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7e37be0 a2=3 a3=0 items=0 ppid=1 pid=4291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:42.745590 systemd-logind[1585]: New session 9 of user core. Dec 13 23:07:42.745906 kernel: audit: type=1300 audit(1765667262.734:661): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7e37be0 a2=3 a3=0 items=0 ppid=1 pid=4291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:42.745932 kernel: audit: type=1327 audit(1765667262.734:661): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:42.734000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:42.751394 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 23:07:42.754000 audit[4291]: USER_START pid=4291 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.760254 kernel: audit: type=1105 audit(1765667262.754:662): pid=4291 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.760326 kernel: audit: type=1103 audit(1765667262.759:663): pid=4295 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.759000 audit[4295]: CRED_ACQ pid=4295 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.958798 sshd[4295]: Connection closed by 10.0.0.1 port 58806 Dec 13 23:07:42.959659 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Dec 13 23:07:42.962000 audit[4291]: USER_END pid=4291 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.966626 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. 
Dec 13 23:07:42.966958 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:58806.service: Deactivated successfully. Dec 13 23:07:42.962000 audit[4291]: CRED_DISP pid=4291 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.969252 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 23:07:42.970228 kernel: audit: type=1106 audit(1765667262.962:664): pid=4291 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.970377 kernel: audit: type=1104 audit(1765667262.962:665): pid=4291 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:42.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:58806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:42.972785 systemd-logind[1585]: Removed session 9. 
Dec 13 23:07:43.733459 kubelet[2771]: E1213 23:07:43.733280 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:43.734260 containerd[1603]: time="2025-12-13T23:07:43.733784561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hjk8,Uid:c0c4f348-68ea-46b0-9f9e-e194751ec60f,Namespace:kube-system,Attempt:0,}" Dec 13 23:07:43.890337 systemd-networkd[1296]: cali011dc0efa29: Link UP Dec 13 23:07:43.890491 systemd-networkd[1296]: cali011dc0efa29: Gained carrier Dec 13 23:07:43.902904 containerd[1603]: 2025-12-13 23:07:43.819 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0 coredns-674b8bbfcf- kube-system c0c4f348-68ea-46b0-9f9e-e194751ec60f 876 0 2025-12-13 23:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7hjk8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali011dc0efa29 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-" Dec 13 23:07:43.902904 containerd[1603]: 2025-12-13 23:07:43.819 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.902904 containerd[1603]: 2025-12-13 23:07:43.848 [INFO][4324] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" HandleID="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Workload="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.848 [INFO][4324] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" HandleID="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Workload="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7hjk8", "timestamp":"2025-12-13 23:07:43.848775925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.848 [INFO][4324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.849 [INFO][4324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.849 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.859 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" host="localhost" Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.865 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.870 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.872 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.874 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:43.903195 containerd[1603]: 2025-12-13 23:07:43.874 [INFO][4324] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" host="localhost" Dec 13 23:07:43.903474 containerd[1603]: 2025-12-13 23:07:43.876 [INFO][4324] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93 Dec 13 23:07:43.903474 containerd[1603]: 2025-12-13 23:07:43.879 [INFO][4324] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" host="localhost" Dec 13 23:07:43.903474 containerd[1603]: 2025-12-13 23:07:43.885 [INFO][4324] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" host="localhost" Dec 13 23:07:43.903474 containerd[1603]: 2025-12-13 23:07:43.885 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" host="localhost" Dec 13 23:07:43.903474 containerd[1603]: 2025-12-13 23:07:43.885 [INFO][4324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:43.903474 containerd[1603]: 2025-12-13 23:07:43.885 [INFO][4324] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" HandleID="k8s-pod-network.3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Workload="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.903623 containerd[1603]: 2025-12-13 23:07:43.887 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0c4f348-68ea-46b0-9f9e-e194751ec60f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7hjk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali011dc0efa29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:43.903808 containerd[1603]: 2025-12-13 23:07:43.887 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.903808 containerd[1603]: 2025-12-13 23:07:43.887 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali011dc0efa29 ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.903808 containerd[1603]: 2025-12-13 23:07:43.889 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.903923 containerd[1603]: 2025-12-13 23:07:43.889 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0c4f348-68ea-46b0-9f9e-e194751ec60f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93", Pod:"coredns-674b8bbfcf-7hjk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali011dc0efa29", MAC:"be:4e:1f:05:6f:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:43.903923 containerd[1603]: 2025-12-13 23:07:43.900 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hjk8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hjk8-eth0" Dec 13 23:07:43.916000 audit[4343]: NETFILTER_CFG table=filter:127 family=2 entries=42 op=nft_register_chain pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:43.916000 audit[4343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=fffff87f2860 a2=0 a3=ffffbd6befa8 items=0 ppid=4040 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.916000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:43.927828 containerd[1603]: time="2025-12-13T23:07:43.927737334Z" level=info msg="connecting to shim 3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93" address="unix:///run/containerd/s/e8215ef3dd2a3863375902fa977af1fb366eff76846ca793e87705a1af39e120" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:43.959372 systemd[1]: Started cri-containerd-3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93.scope - libcontainer container 3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93. 
Dec 13 23:07:43.969000 audit: BPF prog-id=215 op=LOAD Dec 13 23:07:43.969000 audit: BPF prog-id=216 op=LOAD Dec 13 23:07:43.969000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.969000 audit: BPF prog-id=216 op=UNLOAD Dec 13 23:07:43.969000 audit[4363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.969000 audit: BPF prog-id=217 op=LOAD Dec 13 23:07:43.969000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.969000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.970000 audit: BPF prog-id=218 op=LOAD Dec 13 23:07:43.970000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.970000 audit: BPF prog-id=218 op=UNLOAD Dec 13 23:07:43.970000 audit[4363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.970000 audit: BPF prog-id=217 op=UNLOAD Dec 13 23:07:43.970000 audit[4363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:43.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.970000 audit: BPF prog-id=219 op=LOAD Dec 13 23:07:43.970000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4353 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:43.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626566653937343835363365343335383064336136393539656666 Dec 13 23:07:43.971784 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:43.994526 containerd[1603]: time="2025-12-13T23:07:43.994347310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hjk8,Uid:c0c4f348-68ea-46b0-9f9e-e194751ec60f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93\"" Dec 13 23:07:43.995926 kubelet[2771]: E1213 23:07:43.995901 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:43.999753 containerd[1603]: time="2025-12-13T23:07:43.999648942Z" level=info msg="CreateContainer within sandbox \"3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 23:07:44.010796 containerd[1603]: 
time="2025-12-13T23:07:44.010559370Z" level=info msg="Container 3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:44.013144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973062800.mount: Deactivated successfully. Dec 13 23:07:44.016069 containerd[1603]: time="2025-12-13T23:07:44.016016161Z" level=info msg="CreateContainer within sandbox \"3fbefe9748563e43580d3a6959eff7e6fcb92b61135be486e6130d4e8d0a7d93\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e\"" Dec 13 23:07:44.016900 containerd[1603]: time="2025-12-13T23:07:44.016828713Z" level=info msg="StartContainer for \"3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e\"" Dec 13 23:07:44.017865 containerd[1603]: time="2025-12-13T23:07:44.017807528Z" level=info msg="connecting to shim 3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e" address="unix:///run/containerd/s/e8215ef3dd2a3863375902fa977af1fb366eff76846ca793e87705a1af39e120" protocol=ttrpc version=3 Dec 13 23:07:44.039317 systemd[1]: Started cri-containerd-3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e.scope - libcontainer container 3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e. 
Dec 13 23:07:44.048000 audit: BPF prog-id=220 op=LOAD Dec 13 23:07:44.049000 audit: BPF prog-id=221 op=LOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.049000 audit: BPF prog-id=221 op=UNLOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.049000 audit: BPF prog-id=222 op=LOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.049000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.049000 audit: BPF prog-id=223 op=LOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.049000 audit: BPF prog-id=223 op=UNLOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.049000 audit: BPF prog-id=222 op=UNLOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:44.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.049000 audit: BPF prog-id=224 op=LOAD Dec 13 23:07:44.049000 audit[4390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4353 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343533373438653462386663386532306635346132653431353531 Dec 13 23:07:44.064939 containerd[1603]: time="2025-12-13T23:07:44.064856083Z" level=info msg="StartContainer for \"3c453748e4b8fc8e20f54a2e4155119cd10239c4f5f7765bc3f913fd4c97995e\" returns successfully" Dec 13 23:07:44.739033 containerd[1603]: time="2025-12-13T23:07:44.738879410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fcdcb775-d2n7t,Uid:a7cced14-c53b-44b0-9445-694ed7cd5577,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:44.739700 containerd[1603]: time="2025-12-13T23:07:44.739145966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-7hz8c,Uid:485d924b-b146-4fe9-a032-61945d861754,Namespace:calico-apiserver,Attempt:0,}" Dec 13 23:07:44.853466 systemd-networkd[1296]: cali7361ec7854f: Link UP Dec 13 23:07:44.854261 systemd-networkd[1296]: cali7361ec7854f: Gained carrier Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.785 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0 calico-kube-controllers-69fcdcb775- calico-system a7cced14-c53b-44b0-9445-694ed7cd5577 877 0 2025-12-13 23:07:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69fcdcb775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69fcdcb775-d2n7t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7361ec7854f [] [] }} ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.785 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.813 [INFO][4462] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" HandleID="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Workload="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.813 [INFO][4462] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" HandleID="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" 
Workload="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69fcdcb775-d2n7t", "timestamp":"2025-12-13 23:07:44.813710789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.813 [INFO][4462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.813 [INFO][4462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.813 [INFO][4462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.823 [INFO][4462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.829 [INFO][4462] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.833 [INFO][4462] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.835 [INFO][4462] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.837 [INFO][4462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.837 [INFO][4462] ipam/ipam.go 1219: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.839 [INFO][4462] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.843 [INFO][4462] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.848 [INFO][4462] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.848 [INFO][4462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" host="localhost" Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.848 [INFO][4462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 23:07:44.868792 containerd[1603]: 2025-12-13 23:07:44.848 [INFO][4462] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" HandleID="k8s-pod-network.631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Workload="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.869398 containerd[1603]: 2025-12-13 23:07:44.850 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0", GenerateName:"calico-kube-controllers-69fcdcb775-", Namespace:"calico-system", SelfLink:"", UID:"a7cced14-c53b-44b0-9445-694ed7cd5577", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69fcdcb775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69fcdcb775-d2n7t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7361ec7854f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:44.869398 containerd[1603]: 2025-12-13 23:07:44.851 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.869398 containerd[1603]: 2025-12-13 23:07:44.851 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7361ec7854f ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.869398 containerd[1603]: 2025-12-13 23:07:44.854 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.869398 containerd[1603]: 2025-12-13 23:07:44.855 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0", GenerateName:"calico-kube-controllers-69fcdcb775-", Namespace:"calico-system", SelfLink:"", UID:"a7cced14-c53b-44b0-9445-694ed7cd5577", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69fcdcb775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f", Pod:"calico-kube-controllers-69fcdcb775-d2n7t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7361ec7854f", MAC:"62:6f:5b:fa:a7:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:44.869398 containerd[1603]: 2025-12-13 23:07:44.866 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" Namespace="calico-system" Pod="calico-kube-controllers-69fcdcb775-d2n7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69fcdcb775--d2n7t-eth0" Dec 13 23:07:44.877179 kubelet[2771]: E1213 23:07:44.877061 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:44.892000 audit[4487]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:44.892000 audit[4487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffd17288a0 a2=0 a3=ffff9f9c8fa8 items=0 ppid=4040 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:44.904307 containerd[1603]: time="2025-12-13T23:07:44.904261371Z" level=info msg="connecting to shim 631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f" address="unix:///run/containerd/s/557f5dbbc1feea763a588c061817a22bea886352ba1b0d3907d1378d941ef081" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:44.916398 kubelet[2771]: I1213 23:07:44.916081 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7hjk8" podStartSLOduration=34.916062275 podStartE2EDuration="34.916062275s" podCreationTimestamp="2025-12-13 23:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:07:44.894086931 +0000 UTC m=+40.277295173" watchObservedRunningTime="2025-12-13 23:07:44.916062275 +0000 UTC m=+40.299270517" Dec 13 23:07:44.922000 audit[4512]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:44.922000 audit[4512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 
a1=ffffd20c0c40 a2=0 a3=1 items=0 ppid=2912 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:44.927000 audit[4512]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:44.927000 audit[4512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd20c0c40 a2=0 a3=1 items=0 ppid=2912 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:44.940339 systemd[1]: Started cri-containerd-631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f.scope - libcontainer container 631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f. 
Dec 13 23:07:44.948000 audit[4531]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:44.948000 audit[4531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc3fe3bd0 a2=0 a3=1 items=0 ppid=2912 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:44.954000 audit[4531]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:44.954000 audit[4531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc3fe3bd0 a2=0 a3=1 items=0 ppid=2912 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:44.958000 audit: BPF prog-id=225 op=LOAD Dec 13 23:07:44.960000 audit: BPF prog-id=226 op=LOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.960000 audit: BPF prog-id=226 op=UNLOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.960000 audit: BPF prog-id=227 op=LOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.960000 audit: BPF prog-id=228 op=LOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.960000 audit: BPF prog-id=228 op=UNLOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.960000 audit: BPF prog-id=227 op=UNLOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.960000 audit: BPF prog-id=229 op=LOAD Dec 13 23:07:44.960000 audit[4511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4498 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:44.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316339393361343831316639333966323430316166396330653164 Dec 13 23:07:44.963435 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:44.985255 systemd-networkd[1296]: cali986b1e3b3c7: Link UP Dec 13 23:07:44.985502 systemd-networkd[1296]: cali986b1e3b3c7: Gained carrier Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.786 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0 calico-apiserver-567b4f6b5- calico-apiserver 485d924b-b146-4fe9-a032-61945d861754 869 0 2025-12-13 23:07:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567b4f6b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567b4f6b5-7hz8c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali986b1e3b3c7 [] [] }} ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.787 [INFO][4434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" 
Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.815 [INFO][4461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" HandleID="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Workload="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.815 [INFO][4461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" HandleID="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Workload="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567b4f6b5-7hz8c", "timestamp":"2025-12-13 23:07:44.81517371 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.815 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.848 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.849 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.925 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.933 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.942 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.945 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.950 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.950 [INFO][4461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.952 [INFO][4461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.958 [INFO][4461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.967 [INFO][4461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.967 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" host="localhost" Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.967 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:45.009991 containerd[1603]: 2025-12-13 23:07:44.967 [INFO][4461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" HandleID="k8s-pod-network.8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Workload="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" Dec 13 23:07:45.011375 containerd[1603]: 2025-12-13 23:07:44.977 [INFO][4434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0", GenerateName:"calico-apiserver-567b4f6b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"485d924b-b146-4fe9-a032-61945d861754", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567b4f6b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567b4f6b5-7hz8c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali986b1e3b3c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:45.011375 containerd[1603]: 2025-12-13 23:07:44.977 [INFO][4434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" Dec 13 23:07:45.011375 containerd[1603]: 2025-12-13 23:07:44.977 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali986b1e3b3c7 ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" Dec 13 23:07:45.011375 containerd[1603]: 2025-12-13 23:07:44.982 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" Dec 13 23:07:45.011375 containerd[1603]: 2025-12-13 23:07:44.982 [INFO][4434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0", GenerateName:"calico-apiserver-567b4f6b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"485d924b-b146-4fe9-a032-61945d861754", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567b4f6b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff", Pod:"calico-apiserver-567b4f6b5-7hz8c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali986b1e3b3c7", MAC:"1e:6c:81:be:0a:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:45.011375 containerd[1603]: 2025-12-13 23:07:45.006 [INFO][4434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-7hz8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--7hz8c-eth0" Dec 13 23:07:45.029000 audit[4542]: NETFILTER_CFG table=filter:133 family=2 entries=58 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:45.029000 audit[4542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=ffffe8eee5e0 a2=0 a3=ffffb1d7ffa8 items=0 ppid=4040 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.029000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:45.037140 containerd[1603]: time="2025-12-13T23:07:45.036615239Z" level=info msg="connecting to shim 8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff" address="unix:///run/containerd/s/73b4d4b436bc5d92d7585cd3f419887fe92c3c60f42354ae626437bfe1375db5" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:45.065608 systemd[1]: Started cri-containerd-8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff.scope - libcontainer container 8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff. 
Dec 13 23:07:45.068015 containerd[1603]: time="2025-12-13T23:07:45.067981189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fcdcb775-d2n7t,Uid:a7cced14-c53b-44b0-9445-694ed7cd5577,Namespace:calico-system,Attempt:0,} returns sandbox id \"631c993a4811f939f2401af9c0e1da0e7176a014135fe3320670e34d6504307f\"" Dec 13 23:07:45.075000 audit: BPF prog-id=230 op=LOAD Dec 13 23:07:45.076000 audit: BPF prog-id=231 op=LOAD Dec 13 23:07:45.076000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.076000 audit: BPF prog-id=231 op=UNLOAD Dec 13 23:07:45.076000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.077000 audit: BPF prog-id=232 op=LOAD Dec 13 23:07:45.077000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.077000 audit: BPF prog-id=233 op=LOAD Dec 13 23:07:45.077000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.079133 containerd[1603]: time="2025-12-13T23:07:45.079013303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 23:07:45.078000 audit: BPF prog-id=233 op=UNLOAD Dec 13 23:07:45.078000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.078000 audit: BPF 
prog-id=232 op=UNLOAD Dec 13 23:07:45.078000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.078000 audit: BPF prog-id=234 op=LOAD Dec 13 23:07:45.078000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4552 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343266363130363133393363323332326666333333373539633139 Dec 13 23:07:45.080930 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:45.103282 containerd[1603]: time="2025-12-13T23:07:45.103236778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-7hz8c,Uid:485d924b-b146-4fe9-a032-61945d861754,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8742f61061393c2322ff333759c1979bab8945b34b219773bcfcbfea3b21a5ff\"" Dec 13 23:07:45.278962 containerd[1603]: time="2025-12-13T23:07:45.278078013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:45.279991 containerd[1603]: 
time="2025-12-13T23:07:45.279948623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 23:07:45.280233 containerd[1603]: time="2025-12-13T23:07:45.280019913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:45.280588 kubelet[2771]: E1213 23:07:45.280412 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 23:07:45.280588 kubelet[2771]: E1213 23:07:45.280457 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 23:07:45.280958 containerd[1603]: time="2025-12-13T23:07:45.280752971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 23:07:45.281764 kubelet[2771]: E1213 23:07:45.280744 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8vrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69fcdcb775-d2n7t_calico-system(a7cced14-c53b-44b0-9445-694ed7cd5577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:45.282957 kubelet[2771]: E1213 23:07:45.282908 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" podUID="a7cced14-c53b-44b0-9445-694ed7cd5577" Dec 13 23:07:45.455281 systemd-networkd[1296]: cali011dc0efa29: Gained IPv6LL Dec 13 23:07:45.496825 containerd[1603]: time="2025-12-13T23:07:45.496761504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:45.530433 containerd[1603]: time="2025-12-13T23:07:45.530297664Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 23:07:45.530433 containerd[1603]: time="2025-12-13T23:07:45.530362033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:45.530582 kubelet[2771]: E1213 23:07:45.530548 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:07:45.530647 kubelet[2771]: E1213 23:07:45.530593 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:07:45.530781 kubelet[2771]: E1213 23:07:45.530727 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st5j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-567b4f6b5-7hz8c_calico-apiserver(485d924b-b146-4fe9-a032-61945d861754): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:45.532296 kubelet[2771]: E1213 23:07:45.532238 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:07:45.733383 kubelet[2771]: E1213 23:07:45.733241 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:45.733866 containerd[1603]: time="2025-12-13T23:07:45.733827691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fcgnt,Uid:749ee60f-8b5b-4a24-9e66-ab82e119fd2c,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:45.733924 containerd[1603]: time="2025-12-13T23:07:45.733879698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvkzj,Uid:d2d46229-e483-4a67-a12f-24b1342fa667,Namespace:kube-system,Attempt:0,}" Dec 13 23:07:45.734169 containerd[1603]: time="2025-12-13T23:07:45.734094767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xf56,Uid:68e4691c-7de1-4668-91bc-eef5c31432bb,Namespace:calico-system,Attempt:0,}" Dec 13 23:07:45.881035 kubelet[2771]: E1213 23:07:45.880859 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:45.881035 kubelet[2771]: E1213 23:07:45.880913 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" podUID="a7cced14-c53b-44b0-9445-694ed7cd5577" Dec 13 23:07:45.881609 kubelet[2771]: E1213 23:07:45.881274 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:07:45.942000 audit[4637]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:45.942000 audit[4637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd36266a0 a2=0 a3=1 items=0 ppid=2912 pid=4637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.942000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:45.951000 audit[4637]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=4637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:45.951000 audit[4637]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=5772 a0=3 a1=ffffd36266a0 a2=0 a3=1 items=0 ppid=2912 pid=4637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:45.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:46.045317 systemd-networkd[1296]: calicfffbfd1e14: Link UP Dec 13 23:07:46.045516 systemd-networkd[1296]: calicfffbfd1e14: Gained carrier Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.969 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--fcgnt-eth0 goldmane-666569f655- calico-system 749ee60f-8b5b-4a24-9e66-ab82e119fd2c 881 0 2025-12-13 23:07:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-fcgnt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicfffbfd1e14 [] [] }} ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.969 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.998 [INFO][4646] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" HandleID="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Workload="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.998 [INFO][4646] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" HandleID="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Workload="localhost-k8s-goldmane--666569f655--fcgnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-fcgnt", "timestamp":"2025-12-13 23:07:45.998248612 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.998 [INFO][4646] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.998 [INFO][4646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:45.998 [INFO][4646] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.010 [INFO][4646] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.014 [INFO][4646] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.019 [INFO][4646] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.020 [INFO][4646] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.023 [INFO][4646] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.023 [INFO][4646] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.024 [INFO][4646] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5 Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.034 [INFO][4646] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.039 [INFO][4646] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.039 [INFO][4646] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" host="localhost" Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.039 [INFO][4646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:46.065813 containerd[1603]: 2025-12-13 23:07:46.039 [INFO][4646] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" HandleID="k8s-pod-network.0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Workload="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.066538 containerd[1603]: 2025-12-13 23:07:46.041 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fcgnt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"749ee60f-8b5b-4a24-9e66-ab82e119fd2c", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-fcgnt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicfffbfd1e14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:46.066538 containerd[1603]: 2025-12-13 23:07:46.041 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.066538 containerd[1603]: 2025-12-13 23:07:46.041 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfffbfd1e14 ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.066538 containerd[1603]: 2025-12-13 23:07:46.043 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.066538 containerd[1603]: 2025-12-13 23:07:46.043 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fcgnt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"749ee60f-8b5b-4a24-9e66-ab82e119fd2c", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5", Pod:"goldmane-666569f655-fcgnt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicfffbfd1e14", MAC:"92:23:a9:77:c1:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:46.066538 containerd[1603]: 2025-12-13 23:07:46.063 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" Namespace="calico-system" Pod="goldmane-666569f655-fcgnt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fcgnt-eth0" Dec 13 23:07:46.077000 audit[4681]: NETFILTER_CFG table=filter:136 family=2 entries=56 op=nft_register_chain 
pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:46.077000 audit[4681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28744 a0=3 a1=ffffc63e9520 a2=0 a3=ffffb2638fa8 items=0 ppid=4040 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.077000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:46.088389 containerd[1603]: time="2025-12-13T23:07:46.088354756Z" level=info msg="connecting to shim 0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5" address="unix:///run/containerd/s/776d68587251426856e1025ab735eb59965c5d8964b7ae1276552c56c9ce2ea4" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:46.115315 systemd[1]: Started cri-containerd-0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5.scope - libcontainer container 0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5. 
Dec 13 23:07:46.133000 audit: BPF prog-id=235 op=LOAD Dec 13 23:07:46.133000 audit: BPF prog-id=236 op=LOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c180 a2=98 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.133000 audit: BPF prog-id=236 op=UNLOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.133000 audit: BPF prog-id=237 op=LOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c3e8 a2=98 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.133000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.133000 audit: BPF prog-id=238 op=LOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400020c168 a2=98 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.133000 audit: BPF prog-id=238 op=UNLOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.133000 audit: BPF prog-id=237 op=UNLOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.133000 audit: BPF prog-id=239 op=LOAD Dec 13 23:07:46.133000 audit[4701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c648 a2=98 a3=0 items=0 ppid=4691 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036383435343266633235316535313838623065666230616239396664 Dec 13 23:07:46.136075 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:46.150159 systemd-networkd[1296]: calia039ab3be54: Link UP Dec 13 23:07:46.150336 systemd-networkd[1296]: calia039ab3be54: Gained carrier Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:45.967 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0 coredns-674b8bbfcf- kube-system d2d46229-e483-4a67-a12f-24b1342fa667 875 0 2025-12-13 23:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pvkzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia039ab3be54 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:45.967 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.002 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" HandleID="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Workload="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.003 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" HandleID="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Workload="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pvkzj", "timestamp":"2025-12-13 23:07:46.0026986 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.003 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.039 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.039 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.112 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.117 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.123 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.125 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.129 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.129 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.130 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.135 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.142 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.142 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" host="localhost" Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.142 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:46.167080 containerd[1603]: 2025-12-13 23:07:46.142 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" HandleID="k8s-pod-network.434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Workload="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.167668 containerd[1603]: 2025-12-13 23:07:46.147 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d2d46229-e483-4a67-a12f-24b1342fa667", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pvkzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia039ab3be54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:46.167668 containerd[1603]: 2025-12-13 23:07:46.147 [INFO][4611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.167668 containerd[1603]: 2025-12-13 23:07:46.147 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia039ab3be54 ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.167668 containerd[1603]: 2025-12-13 23:07:46.149 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.167668 containerd[1603]: 2025-12-13 23:07:46.150 [INFO][4611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d2d46229-e483-4a67-a12f-24b1342fa667", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe", Pod:"coredns-674b8bbfcf-pvkzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia039ab3be54", MAC:"d2:87:e9:05:32:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:46.167668 containerd[1603]: 2025-12-13 23:07:46.164 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvkzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvkzj-eth0" Dec 13 23:07:46.178454 containerd[1603]: time="2025-12-13T23:07:46.178387520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fcgnt,Uid:749ee60f-8b5b-4a24-9e66-ab82e119fd2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0684542fc251e5188b0efb0ab99fd8671df69507c2a6ac83ac188e1a089659e5\"" Dec 13 23:07:46.180191 containerd[1603]: time="2025-12-13T23:07:46.180147749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 23:07:46.183000 audit[4737]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:46.183000 audit[4737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22720 a0=3 a1=ffffc2566170 a2=0 a3=ffff99c91fa8 items=0 ppid=4040 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.183000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:46.194843 containerd[1603]: time="2025-12-13T23:07:46.194806731Z" level=info msg="connecting to shim 
434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe" address="unix:///run/containerd/s/ab1c0b2d1999d5d9fbe6eac1994bc9812495a82482dd459307e70c44a7aaf347" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:46.221342 systemd[1]: Started cri-containerd-434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe.scope - libcontainer container 434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe. Dec 13 23:07:46.234000 audit: BPF prog-id=240 op=LOAD Dec 13 23:07:46.235000 audit: BPF prog-id=241 op=LOAD Dec 13 23:07:46.235000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.235000 audit: BPF prog-id=241 op=UNLOAD Dec 13 23:07:46.235000 audit[4757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.236000 audit: BPF prog-id=242 op=LOAD Dec 13 23:07:46.236000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001783e8 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.236000 audit: BPF prog-id=243 op=LOAD Dec 13 23:07:46.236000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.236000 audit: BPF prog-id=243 op=UNLOAD Dec 13 23:07:46.236000 audit[4757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.236000 audit: BPF prog-id=242 op=UNLOAD Dec 13 23:07:46.236000 audit[4757]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.236000 audit: BPF prog-id=244 op=LOAD Dec 13 23:07:46.236000 audit[4757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433346336653531316532386161653734656662386334656436373230 Dec 13 23:07:46.238529 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:46.247322 systemd-networkd[1296]: calie7bbd2a0f47: Link UP Dec 13 23:07:46.247455 systemd-networkd[1296]: calie7bbd2a0f47: Gained carrier Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:45.977 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8xf56-eth0 csi-node-driver- calico-system 68e4691c-7de1-4668-91bc-eef5c31432bb 772 0 2025-12-13 23:07:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8xf56 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie7bbd2a0f47 [] [] }} ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:45.977 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.009 [INFO][4655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" HandleID="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Workload="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.009 [INFO][4655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" HandleID="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Workload="localhost-k8s-csi--node--driver--8xf56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8xf56", "timestamp":"2025-12-13 23:07:46.009442555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.009 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.142 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.142 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.213 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.220 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.225 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.227 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.229 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.229 [INFO][4655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.231 [INFO][4655] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.234 [INFO][4655] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.242 [INFO][4655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.242 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" host="localhost" Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.242 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:46.262704 containerd[1603]: 2025-12-13 23:07:46.243 [INFO][4655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" HandleID="k8s-pod-network.975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Workload="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.263255 containerd[1603]: 2025-12-13 23:07:46.245 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8xf56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68e4691c-7de1-4668-91bc-eef5c31432bb", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8xf56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7bbd2a0f47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:46.263255 containerd[1603]: 2025-12-13 23:07:46.245 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.263255 containerd[1603]: 2025-12-13 23:07:46.245 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7bbd2a0f47 ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.263255 containerd[1603]: 2025-12-13 23:07:46.247 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.263255 containerd[1603]: 2025-12-13 23:07:46.248 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8xf56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68e4691c-7de1-4668-91bc-eef5c31432bb", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe", Pod:"csi-node-driver-8xf56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7bbd2a0f47", MAC:"22:c4:82:60:f4:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 
23:07:46.263255 containerd[1603]: 2025-12-13 23:07:46.259 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" Namespace="calico-system" Pod="csi-node-driver-8xf56" WorkloadEndpoint="localhost-k8s-csi--node--driver--8xf56-eth0" Dec 13 23:07:46.273232 containerd[1603]: time="2025-12-13T23:07:46.273186463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvkzj,Uid:d2d46229-e483-4a67-a12f-24b1342fa667,Namespace:kube-system,Attempt:0,} returns sandbox id \"434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe\"" Dec 13 23:07:46.274444 kubelet[2771]: E1213 23:07:46.274417 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:46.279140 containerd[1603]: time="2025-12-13T23:07:46.279087189Z" level=info msg="CreateContainer within sandbox \"434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 23:07:46.278000 audit[4790]: NETFILTER_CFG table=filter:138 family=2 entries=56 op=nft_register_chain pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:46.278000 audit[4790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25516 a0=3 a1=ffffdcc4b3e0 a2=0 a3=ffff96fbcfa8 items=0 ppid=4040 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.278000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:46.284945 containerd[1603]: time="2025-12-13T23:07:46.284749924Z" level=info msg="connecting to shim 
975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe" address="unix:///run/containerd/s/d7a9f13c1f5e4f351a84acbdaf1d9c1dc5260ce365706c94c0feb5758b4a7e5d" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:46.287023 containerd[1603]: time="2025-12-13T23:07:46.286996095Z" level=info msg="Container 82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:07:46.294250 containerd[1603]: time="2025-12-13T23:07:46.294216272Z" level=info msg="CreateContainer within sandbox \"434c6e511e28aae74efb8c4ed6720a0232ede3a03f3e4e2a01354640d7c101fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04\"" Dec 13 23:07:46.294968 containerd[1603]: time="2025-12-13T23:07:46.294936126Z" level=info msg="StartContainer for \"82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04\"" Dec 13 23:07:46.295831 containerd[1603]: time="2025-12-13T23:07:46.295805159Z" level=info msg="connecting to shim 82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04" address="unix:///run/containerd/s/ab1c0b2d1999d5d9fbe6eac1994bc9812495a82482dd459307e70c44a7aaf347" protocol=ttrpc version=3 Dec 13 23:07:46.320396 systemd[1]: Started cri-containerd-82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04.scope - libcontainer container 82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04. Dec 13 23:07:46.321854 systemd[1]: Started cri-containerd-975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe.scope - libcontainer container 975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe. 
Dec 13 23:07:46.330000 audit: BPF prog-id=245 op=LOAD Dec 13 23:07:46.331000 audit: BPF prog-id=246 op=LOAD Dec 13 23:07:46.331000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.331000 audit: BPF prog-id=246 op=UNLOAD Dec 13 23:07:46.331000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.331000 audit: BPF prog-id=247 op=LOAD Dec 13 23:07:46.331000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.331000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.332000 audit: BPF prog-id=248 op=LOAD Dec 13 23:07:46.332000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.332000 audit: BPF prog-id=248 op=UNLOAD Dec 13 23:07:46.332000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.332000 audit: BPF prog-id=247 op=UNLOAD Dec 13 23:07:46.332000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:46.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.332000 audit: BPF prog-id=249 op=LOAD Dec 13 23:07:46.332000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4746 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633638313737376165343037303832306632366234666366363337 Dec 13 23:07:46.332000 audit: BPF prog-id=250 op=LOAD Dec 13 23:07:46.333000 audit: BPF prog-id=251 op=LOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.333000 audit: BPF prog-id=251 op=UNLOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.333000 audit: BPF prog-id=252 op=LOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.333000 audit: BPF prog-id=253 op=LOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.333000 audit: BPF prog-id=253 op=UNLOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.333000 audit: BPF prog-id=252 op=UNLOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.333000 audit: BPF prog-id=254 op=LOAD Dec 13 23:07:46.333000 audit[4810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4799 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353039376336356661383861633533616564313266306332303766 Dec 13 23:07:46.335736 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:46.351573 containerd[1603]: 
time="2025-12-13T23:07:46.351536671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xf56,Uid:68e4691c-7de1-4668-91bc-eef5c31432bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"975097c65fa88ac53aed12f0c207fe86ed39c6c90fafc99e0fd5cc251458e8fe\"" Dec 13 23:07:46.356012 containerd[1603]: time="2025-12-13T23:07:46.355893077Z" level=info msg="StartContainer for \"82c681777ae4070820f26b4fcf637cadd7af3e62a9aeb8333e2d3e76cbe71a04\" returns successfully" Dec 13 23:07:46.392288 containerd[1603]: time="2025-12-13T23:07:46.391444690Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:46.393027 containerd[1603]: time="2025-12-13T23:07:46.392959327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 23:07:46.393310 containerd[1603]: time="2025-12-13T23:07:46.393022895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:46.393514 kubelet[2771]: E1213 23:07:46.393486 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 23:07:46.393557 kubelet[2771]: E1213 23:07:46.393529 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 23:07:46.393790 kubelet[2771]: E1213 23:07:46.393745 2771 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm69d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fcgnt_calico-system(749ee60f-8b5b-4a24-9e66-ab82e119fd2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:46.394616 containerd[1603]: time="2025-12-13T23:07:46.394490686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 23:07:46.395773 kubelet[2771]: E1213 23:07:46.395728 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fcgnt" podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c" Dec 13 23:07:46.543279 systemd-networkd[1296]: cali986b1e3b3c7: Gained IPv6LL Dec 13 23:07:46.605953 containerd[1603]: time="2025-12-13T23:07:46.605897641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 
13 23:07:46.607195 systemd-networkd[1296]: cali7361ec7854f: Gained IPv6LL Dec 13 23:07:46.607868 containerd[1603]: time="2025-12-13T23:07:46.607788527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 23:07:46.607933 containerd[1603]: time="2025-12-13T23:07:46.607871338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:46.608207 kubelet[2771]: E1213 23:07:46.608090 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 23:07:46.608207 kubelet[2771]: E1213 23:07:46.608160 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 23:07:46.609231 kubelet[2771]: E1213 23:07:46.608300 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 13 23:07:46.610895 containerd[1603]: time="2025-12-13T23:07:46.610853685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 23:07:46.821810 containerd[1603]: time="2025-12-13T23:07:46.821748134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:46.834322 containerd[1603]: time="2025-12-13T23:07:46.834231434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 23:07:46.834322 containerd[1603]: time="2025-12-13T23:07:46.834309564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:46.835212 kubelet[2771]: E1213 23:07:46.835166 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 23:07:46.835273 kubelet[2771]: E1213 23:07:46.835221 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 23:07:46.835477 kubelet[2771]: E1213 23:07:46.835357 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:46.836596 kubelet[2771]: E1213 23:07:46.836555 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:46.887682 kubelet[2771]: E1213 23:07:46.887549 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:46.888983 kubelet[2771]: E1213 23:07:46.888934 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:46.890852 kubelet[2771]: E1213 23:07:46.890821 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:46.891129 kubelet[2771]: E1213 23:07:46.891080 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fcgnt" podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c" Dec 13 23:07:46.891794 kubelet[2771]: E1213 23:07:46.891756 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" podUID="a7cced14-c53b-44b0-9445-694ed7cd5577" Dec 13 23:07:46.894759 kubelet[2771]: E1213 23:07:46.891675 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:07:46.977000 audit[4870]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:46.977000 audit[4870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdb24d890 a2=0 a3=1 items=0 ppid=2912 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:46.982000 audit[4870]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:46.982000 audit[4870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdb24d890 a2=0 a3=1 items=0 ppid=2912 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:46.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:47.503278 systemd-networkd[1296]: calicfffbfd1e14: Gained IPv6LL Dec 13 23:07:47.733987 containerd[1603]: time="2025-12-13T23:07:47.733937584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-cd8w4,Uid:e9f6be97-9073-49f4-b46f-97add2dc7d48,Namespace:calico-apiserver,Attempt:0,}" Dec 13 23:07:47.823247 systemd-networkd[1296]: calie7bbd2a0f47: Gained IPv6LL Dec 13 23:07:47.856605 systemd-networkd[1296]: cali2d30238b596: Link UP Dec 13 23:07:47.857227 systemd-networkd[1296]: 
cali2d30238b596: Gained carrier Dec 13 23:07:47.868430 kubelet[2771]: I1213 23:07:47.868322 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pvkzj" podStartSLOduration=37.868303543 podStartE2EDuration="37.868303543s" podCreationTimestamp="2025-12-13 23:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:07:46.97244141 +0000 UTC m=+42.355649692" watchObservedRunningTime="2025-12-13 23:07:47.868303543 +0000 UTC m=+43.251511785" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.772 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0 calico-apiserver-567b4f6b5- calico-apiserver e9f6be97-9073-49f4-b46f-97add2dc7d48 879 0 2025-12-13 23:07:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567b4f6b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567b4f6b5-cd8w4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d30238b596 [] [] }} ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.772 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 
23:07:47.802 [INFO][4886] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" HandleID="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Workload="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.802 [INFO][4886] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" HandleID="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Workload="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567b4f6b5-cd8w4", "timestamp":"2025-12-13 23:07:47.802802716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.803 [INFO][4886] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.803 [INFO][4886] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.803 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.815 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.822 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.830 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.832 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.834 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.834 [INFO][4886] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.835 [INFO][4886] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8 Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.845 [INFO][4886] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.851 [INFO][4886] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.851 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" host="localhost" Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.851 [INFO][4886] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 23:07:47.870846 containerd[1603]: 2025-12-13 23:07:47.851 [INFO][4886] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" HandleID="k8s-pod-network.84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Workload="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.871317 containerd[1603]: 2025-12-13 23:07:47.854 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0", GenerateName:"calico-apiserver-567b4f6b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9f6be97-9073-49f4-b46f-97add2dc7d48", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567b4f6b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567b4f6b5-cd8w4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d30238b596", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:47.871317 containerd[1603]: 2025-12-13 23:07:47.854 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.871317 containerd[1603]: 2025-12-13 23:07:47.854 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d30238b596 ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.871317 containerd[1603]: 2025-12-13 23:07:47.857 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.871317 containerd[1603]: 2025-12-13 23:07:47.857 [INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0", GenerateName:"calico-apiserver-567b4f6b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9f6be97-9073-49f4-b46f-97add2dc7d48", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 23, 7, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567b4f6b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8", Pod:"calico-apiserver-567b4f6b5-cd8w4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d30238b596", MAC:"56:cc:30:c6:09:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 23:07:47.871317 containerd[1603]: 2025-12-13 23:07:47.867 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" Namespace="calico-apiserver" Pod="calico-apiserver-567b4f6b5-cd8w4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567b4f6b5--cd8w4-eth0" Dec 13 23:07:47.883335 kernel: kauditd_printk_skb: 219 callbacks suppressed Dec 13 23:07:47.883429 kernel: audit: type=1325 audit(1765667267.880:745): table=filter:141 family=2 entries=67 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:47.880000 audit[4902]: NETFILTER_CFG table=filter:141 family=2 entries=67 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 23:07:47.880000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31868 a0=3 a1=fffff82ab770 a2=0 a3=ffff8e7b3fa8 items=0 ppid=4040 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.887477 systemd-networkd[1296]: calia039ab3be54: Gained IPv6LL Dec 13 23:07:47.889027 kernel: audit: type=1300 audit(1765667267.880:745): arch=c00000b7 syscall=211 success=yes exit=31868 a0=3 a1=fffff82ab770 a2=0 a3=ffff8e7b3fa8 items=0 ppid=4040 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.880000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:47.893622 kernel: audit: type=1327 audit(1765667267.880:745): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 23:07:47.893897 containerd[1603]: 
time="2025-12-13T23:07:47.893835965Z" level=info msg="connecting to shim 84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8" address="unix:///run/containerd/s/9617d71d9a438dcf28da20b88c72ce71d0706a6b233b2cc4bc397c15ccf92891" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:07:47.895690 kubelet[2771]: E1213 23:07:47.895604 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:47.896190 kubelet[2771]: E1213 23:07:47.896055 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fcgnt" podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c" Dec 13 23:07:47.896190 kubelet[2771]: E1213 23:07:47.896076 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8xf56" 
podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:07:47.921319 systemd[1]: Started cri-containerd-84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8.scope - libcontainer container 84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8. Dec 13 23:07:47.933000 audit: BPF prog-id=255 op=LOAD Dec 13 23:07:47.934000 audit: BPF prog-id=256 op=LOAD Dec 13 23:07:47.936797 kernel: audit: type=1334 audit(1765667267.933:746): prog-id=255 op=LOAD Dec 13 23:07:47.936857 kernel: audit: type=1334 audit(1765667267.934:747): prog-id=256 op=LOAD Dec 13 23:07:47.936888 kernel: audit: type=1300 audit(1765667267.934:747): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.934000 audit[4923]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.943510 kernel: audit: type=1327 audit(1765667267.934:747): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.943915 kernel: audit: type=1334 audit(1765667267.934:748): prog-id=256 op=UNLOAD Dec 13 
23:07:47.934000 audit: BPF prog-id=256 op=UNLOAD Dec 13 23:07:47.944518 kernel: audit: type=1300 audit(1765667267.934:748): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.934000 audit[4923]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.944837 systemd-resolved[1261]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 23:07:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.950643 kernel: audit: type=1327 audit(1765667267.934:748): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.934000 audit: BPF prog-id=257 op=LOAD Dec 13 23:07:47.934000 audit[4923]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.934000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.935000 audit: BPF prog-id=258 op=LOAD Dec 13 23:07:47.935000 audit[4923]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.939000 audit: BPF prog-id=258 op=UNLOAD Dec 13 23:07:47.939000 audit[4923]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.939000 audit: BPF prog-id=257 op=UNLOAD Dec 13 23:07:47.939000 audit[4923]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 23:07:47.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.939000 audit: BPF prog-id=259 op=LOAD Dec 13 23:07:47.939000 audit[4923]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4911 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834663037356431366664393737623366393237306533313338653738 Dec 13 23:07:47.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:58810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:47.972508 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:58810.service - OpenSSH per-connection server daemon (10.0.0.1:58810). 
Dec 13 23:07:47.980158 containerd[1603]: time="2025-12-13T23:07:47.979619233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567b4f6b5-cd8w4,Uid:e9f6be97-9073-49f4-b46f-97add2dc7d48,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"84f075d16fd977b3f9270e3138e7827e9e62269a21dfef7735f6b5b71028bba8\"" Dec 13 23:07:47.982111 containerd[1603]: time="2025-12-13T23:07:47.982075983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 23:07:47.996000 audit[4952]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:47.996000 audit[4952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff06eff80 a2=0 a3=1 items=0 ppid=2912 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:47.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:48.006000 audit[4952]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:48.006000 audit[4952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff06eff80 a2=0 a3=1 items=0 ppid=2912 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:48.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:48.040000 audit[4945]: USER_ACCT pid=4945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:48.041459 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 58810 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:07:48.041000 audit[4945]: CRED_ACQ pid=4945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:48.041000 audit[4945]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe610ef60 a2=3 a3=0 items=0 ppid=1 pid=4945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:48.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:48.043312 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:07:48.047566 systemd-logind[1585]: New session 10 of user core. Dec 13 23:07:48.056327 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 23:07:48.057000 audit[4945]: USER_START pid=4945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:48.059000 audit[4957]: CRED_ACQ pid=4957 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:48.189863 containerd[1603]: time="2025-12-13T23:07:48.189748564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:48.191347 containerd[1603]: time="2025-12-13T23:07:48.191292433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:48.191481 containerd[1603]: time="2025-12-13T23:07:48.191379724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 23:07:48.191799 kubelet[2771]: E1213 23:07:48.191745 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:07:48.191877 kubelet[2771]: E1213 23:07:48.191799 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:07:48.192052 kubelet[2771]: E1213 23:07:48.191926 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-567b4f6b5-cd8w4_calico-apiserver(e9f6be97-9073-49f4-b46f-97add2dc7d48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:48.193083 kubelet[2771]: E1213 23:07:48.193037 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48" Dec 13 23:07:48.197616 sshd[4957]: Connection closed by 10.0.0.1 port 58810 Dec 13 23:07:48.197295 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Dec 13 23:07:48.197000 audit[4945]: USER_END pid=4945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:48.197000 audit[4945]: CRED_DISP pid=4945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:48.201701 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:58810.service: Deactivated successfully. Dec 13 23:07:48.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:58810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:48.203751 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 23:07:48.204921 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Dec 13 23:07:48.206573 systemd-logind[1585]: Removed session 10. 
Dec 13 23:07:48.897749 kubelet[2771]: E1213 23:07:48.897709 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:07:48.898386 kubelet[2771]: E1213 23:07:48.898311 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48" Dec 13 23:07:48.917000 audit[4972]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:48.917000 audit[4972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe33f7100 a2=0 a3=1 items=0 ppid=2912 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:48.917000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:48.927000 audit[4972]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:07:48.927000 audit[4972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe33f7100 a2=0 a3=1 items=0 ppid=2912 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:48.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:07:49.423292 systemd-networkd[1296]: cali2d30238b596: Gained IPv6LL Dec 13 23:07:49.900921 kubelet[2771]: E1213 23:07:49.900713 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48" Dec 13 23:07:53.215992 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:48476.service - OpenSSH per-connection server daemon (10.0.0.1:48476). Dec 13 23:07:53.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:48476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:53.223321 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 13 23:07:53.223391 kernel: audit: type=1130 audit(1765667273.216:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:48476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:53.271000 audit[4982]: USER_ACCT pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.272637 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 48476 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:07:53.275059 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:07:53.274000 audit[4982]: CRED_ACQ pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.279021 kernel: audit: type=1101 audit(1765667273.271:768): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.279077 kernel: audit: type=1103 audit(1765667273.274:769): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.281395 kernel: audit: type=1006 audit(1765667273.274:770): pid=4982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 23:07:53.274000 audit[4982]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc037060 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:53.283772 systemd-logind[1585]: New session 11 of user core. Dec 13 23:07:53.285206 kernel: audit: type=1300 audit(1765667273.274:770): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc037060 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:53.285264 kernel: audit: type=1327 audit(1765667273.274:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:53.274000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:53.292295 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 23:07:53.294000 audit[4982]: USER_START pid=4982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.296000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.301823 kernel: audit: type=1105 audit(1765667273.294:771): pid=4982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.301902 kernel: audit: type=1103 audit(1765667273.296:772): pid=4986 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.419376 sshd[4986]: Connection closed by 10.0.0.1 port 48476 Dec 13 23:07:53.420267 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 13 23:07:53.421000 audit[4982]: USER_END pid=4982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.421000 audit[4982]: CRED_DISP pid=4982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.430567 kernel: audit: type=1106 audit(1765667273.421:773): pid=4982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.430637 kernel: audit: type=1104 audit(1765667273.421:774): pid=4982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.436125 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:48476.service: Deactivated successfully. Dec 13 23:07:53.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:48476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:53.437930 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 23:07:53.439657 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Dec 13 23:07:53.442171 systemd-logind[1585]: Removed session 11. Dec 13 23:07:53.442791 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:48488.service - OpenSSH per-connection server daemon (10.0.0.1:48488). Dec 13 23:07:53.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:48488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:53.510000 audit[5001]: USER_ACCT pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.511967 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 48488 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:07:53.511000 audit[5001]: CRED_ACQ pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.511000 audit[5001]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfab8990 a2=3 a3=0 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:53.511000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:53.513962 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:07:53.518073 systemd-logind[1585]: New session 
12 of user core. Dec 13 23:07:53.524353 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 23:07:53.525000 audit[5001]: USER_START pid=5001 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.526000 audit[5005]: CRED_ACQ pid=5005 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.653319 sshd[5005]: Connection closed by 10.0.0.1 port 48488 Dec 13 23:07:53.653792 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Dec 13 23:07:53.654000 audit[5001]: USER_END pid=5001 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.655000 audit[5001]: CRED_DISP pid=5001 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:48488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:53.666163 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:48488.service: Deactivated successfully. Dec 13 23:07:53.670612 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 23:07:53.671598 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Dec 13 23:07:53.675509 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:48502.service - OpenSSH per-connection server daemon (10.0.0.1:48502). Dec 13 23:07:53.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:48502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:53.679408 systemd-logind[1585]: Removed session 12. Dec 13 23:07:53.735283 containerd[1603]: time="2025-12-13T23:07:53.735234275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 23:07:53.738000 audit[5016]: USER_ACCT pid=5016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.740278 sshd[5016]: Accepted publickey for core from 10.0.0.1 port 48502 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:07:53.740000 audit[5016]: CRED_ACQ pid=5016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.740000 audit[5016]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf8e2c10 a2=3 a3=0 items=0 ppid=1 pid=5016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:53.740000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:53.742165 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Dec 13 23:07:53.746137 systemd-logind[1585]: New session 13 of user core. Dec 13 23:07:53.750334 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 23:07:53.753000 audit[5016]: USER_START pid=5016 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.755000 audit[5020]: CRED_ACQ pid=5020 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.896391 sshd[5020]: Connection closed by 10.0.0.1 port 48502 Dec 13 23:07:53.896610 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Dec 13 23:07:53.898000 audit[5016]: USER_END pid=5016 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.898000 audit[5016]: CRED_DISP pid=5016 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:53.902809 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:48502.service: Deactivated successfully. Dec 13 23:07:53.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:48502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:07:53.905692 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 23:07:53.906836 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Dec 13 23:07:53.908816 systemd-logind[1585]: Removed session 13. Dec 13 23:07:54.027247 containerd[1603]: time="2025-12-13T23:07:54.027204935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:54.031415 containerd[1603]: time="2025-12-13T23:07:54.031371620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 23:07:54.031485 containerd[1603]: time="2025-12-13T23:07:54.031433586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:54.032506 kubelet[2771]: E1213 23:07:54.032464 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 23:07:54.032896 kubelet[2771]: E1213 23:07:54.032510 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 23:07:54.032896 kubelet[2771]: E1213 23:07:54.032627 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1b4a3e8666244488bf626e9588026acb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgzxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c669cdcf6-md9gr_calico-system(22a73c93-9cd2-420c-82af-38c36ee2bfd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:54.035388 containerd[1603]: time="2025-12-13T23:07:54.035359605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 23:07:54.247918 containerd[1603]: 
time="2025-12-13T23:07:54.247372166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:54.248516 containerd[1603]: time="2025-12-13T23:07:54.248480284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 23:07:54.248591 containerd[1603]: time="2025-12-13T23:07:54.248558492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:54.248778 kubelet[2771]: E1213 23:07:54.248697 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 23:07:54.248849 kubelet[2771]: E1213 23:07:54.248777 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 23:07:54.248954 kubelet[2771]: E1213 23:07:54.248914 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgzxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c669cdcf6-md9gr_calico-system(22a73c93-9cd2-420c-82af-38c36ee2bfd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:54.250135 kubelet[2771]: E1213 23:07:54.250076 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c669cdcf6-md9gr" podUID="22a73c93-9cd2-420c-82af-38c36ee2bfd3" Dec 13 23:07:57.734410 containerd[1603]: time="2025-12-13T23:07:57.733770235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 23:07:57.950706 containerd[1603]: time="2025-12-13T23:07:57.950654577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:57.951756 containerd[1603]: time="2025-12-13T23:07:57.951719284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 23:07:57.951815 containerd[1603]: time="2025-12-13T23:07:57.951759848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:57.952062 kubelet[2771]: E1213 23:07:57.952028 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:07:57.952668 kubelet[2771]: E1213 23:07:57.952408 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:07:57.952668 kubelet[2771]: E1213 23:07:57.952594 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st5j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-567b4f6b5-7hz8c_calico-apiserver(485d924b-b146-4fe9-a032-61945d861754): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:57.953806 kubelet[2771]: E1213 23:07:57.953758 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:07:58.733842 containerd[1603]: time="2025-12-13T23:07:58.733590213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 23:07:58.912565 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:48512.service - OpenSSH per-connection server daemon (10.0.0.1:48512). 
Dec 13 23:07:58.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:48512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:58.913510 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 23:07:58.913568 kernel: audit: type=1130 audit(1765667278.911:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:48512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:58.938801 containerd[1603]: time="2025-12-13T23:07:58.938633282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:07:58.939819 containerd[1603]: time="2025-12-13T23:07:58.939759033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 23:07:58.939879 containerd[1603]: time="2025-12-13T23:07:58.939825920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 23:07:58.940161 kubelet[2771]: E1213 23:07:58.940118 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 23:07:58.940222 kubelet[2771]: E1213 23:07:58.940172 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 23:07:58.940370 kubelet[2771]: E1213 23:07:58.940306 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8vrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69fcdcb775-d2n7t_calico-system(a7cced14-c53b-44b0-9445-694ed7cd5577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 23:07:58.941766 kubelet[2771]: E1213 23:07:58.941667 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" podUID="a7cced14-c53b-44b0-9445-694ed7cd5577" Dec 13 23:07:58.972000 audit[5040]: USER_ACCT pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' 
Dec 13 23:07:58.973689 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 48512 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:07:58.977479 kernel: audit: type=1101 audit(1765667278.972:795): pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:58.977563 kernel: audit: type=1103 audit(1765667278.976:796): pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:58.976000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:58.979023 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:07:58.982525 kernel: audit: type=1006 audit(1765667278.976:797): pid=5040 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 23:07:58.982590 kernel: audit: type=1300 audit(1765667278.976:797): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd7105a0 a2=3 a3=0 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:58.976000 audit[5040]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd7105a0 a2=3 a3=0 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:07:58.976000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:58.987137 kernel: audit: type=1327 audit(1765667278.976:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:07:58.988591 systemd-logind[1585]: New session 14 of user core. Dec 13 23:07:59.002282 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 23:07:59.003000 audit[5040]: USER_START pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.004000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.010862 kernel: audit: type=1105 audit(1765667279.003:798): pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.010953 kernel: audit: type=1103 audit(1765667279.004:799): pid=5044 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.101855 sshd[5044]: Connection closed by 10.0.0.1 port 48512 Dec 13 23:07:59.102140 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Dec 13 
23:07:59.102000 audit[5040]: USER_END pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.106012 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:48512.service: Deactivated successfully. Dec 13 23:07:59.102000 audit[5040]: CRED_DISP pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.107780 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 23:07:59.110084 kernel: audit: type=1106 audit(1765667279.102:800): pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.110507 kernel: audit: type=1104 audit(1765667279.102:801): pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:07:59.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:48512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:07:59.110247 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Dec 13 23:07:59.111239 systemd-logind[1585]: Removed session 14. 
Dec 13 23:08:00.734159 containerd[1603]: time="2025-12-13T23:08:00.734120465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 23:08:00.942563 containerd[1603]: time="2025-12-13T23:08:00.942502642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:00.943542 containerd[1603]: time="2025-12-13T23:08:00.943471774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 23:08:00.943622 containerd[1603]: time="2025-12-13T23:08:00.943580545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:00.943956 kubelet[2771]: E1213 23:08:00.943784 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:08:00.943956 kubelet[2771]: E1213 23:08:00.943837 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:08:00.944446 kubelet[2771]: E1213 23:08:00.944373 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-567b4f6b5-cd8w4_calico-apiserver(e9f6be97-9073-49f4-b46f-97add2dc7d48): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:00.945626 kubelet[2771]: E1213 23:08:00.945591 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48" Dec 13 23:08:02.735010 containerd[1603]: time="2025-12-13T23:08:02.734817330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 23:08:02.934577 containerd[1603]: time="2025-12-13T23:08:02.934498434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:02.935610 containerd[1603]: time="2025-12-13T23:08:02.935563093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 23:08:02.935685 containerd[1603]: time="2025-12-13T23:08:02.935625619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:02.936064 kubelet[2771]: E1213 23:08:02.936018 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 23:08:02.936369 kubelet[2771]: E1213 23:08:02.936066 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 23:08:02.936369 kubelet[2771]: E1213 23:08:02.936283 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm69d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fcgnt_calico-system(749ee60f-8b5b-4a24-9e66-ab82e119fd2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:02.936999 containerd[1603]: time="2025-12-13T23:08:02.936967624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 23:08:02.938146 kubelet[2771]: E1213 23:08:02.938111 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fcgnt" 
podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c" Dec 13 23:08:03.145892 containerd[1603]: time="2025-12-13T23:08:03.145841639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:03.146965 containerd[1603]: time="2025-12-13T23:08:03.146920058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 23:08:03.147058 containerd[1603]: time="2025-12-13T23:08:03.147020667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:03.147248 kubelet[2771]: E1213 23:08:03.147195 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 23:08:03.147303 kubelet[2771]: E1213 23:08:03.147251 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 23:08:03.147422 kubelet[2771]: E1213 23:08:03.147373 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 13 23:08:03.152312 containerd[1603]: time="2025-12-13T23:08:03.152263148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 23:08:03.362559 containerd[1603]: time="2025-12-13T23:08:03.362511449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:03.363694 containerd[1603]: time="2025-12-13T23:08:03.363661715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 23:08:03.363776 containerd[1603]: time="2025-12-13T23:08:03.363742642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:03.363945 kubelet[2771]: E1213 23:08:03.363910 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 23:08:03.364009 kubelet[2771]: E1213 23:08:03.363963 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 23:08:03.364460 kubelet[2771]: E1213 23:08:03.364091 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:03.365731 kubelet[2771]: E1213 23:08:03.365581 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:08:04.127326 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:54008.service - OpenSSH per-connection server daemon (10.0.0.1:54008). Dec 13 23:08:04.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:54008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:04.130700 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 23:08:04.130772 kernel: audit: type=1130 audit(1765667284.127:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:54008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:08:04.194000 audit[5063]: USER_ACCT pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.197473 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 54008 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:08:04.199000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.200799 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:08:04.203007 kernel: audit: type=1101 audit(1765667284.194:804): pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.203055 kernel: audit: type=1103 audit(1765667284.199:805): pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.203258 kernel: audit: type=1006 audit(1765667284.199:806): pid=5063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 23:08:04.204818 kernel: audit: type=1300 audit(1765667284.199:806): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3369700 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:04.199000 audit[5063]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3369700 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:04.199000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:04.208521 systemd-logind[1585]: New session 15 of user core. Dec 13 23:08:04.209700 kernel: audit: type=1327 audit(1765667284.199:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:04.212315 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 23:08:04.214000 audit[5063]: USER_START pid=5063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.216000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.222579 kernel: audit: type=1105 audit(1765667284.214:807): pid=5063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.222671 kernel: audit: type=1103 audit(1765667284.216:808): pid=5067 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.308000 sshd[5067]: Connection closed by 10.0.0.1 port 54008 Dec 13 23:08:04.308549 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Dec 13 23:08:04.310000 audit[5063]: USER_END pid=5063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.314317 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:54008.service: Deactivated successfully. Dec 13 23:08:04.310000 audit[5063]: CRED_DISP pid=5063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.317571 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 13 23:08:04.318702 kernel: audit: type=1106 audit(1765667284.310:809): pid=5063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.318797 kernel: audit: type=1104 audit(1765667284.310:810): pid=5063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:04.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:54008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:04.318657 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Dec 13 23:08:04.321639 systemd-logind[1585]: Removed session 15. 
Dec 13 23:08:08.733890 kubelet[2771]: E1213 23:08:08.733771 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:08:08.738041 kubelet[2771]: E1213 23:08:08.738003 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c669cdcf6-md9gr" podUID="22a73c93-9cd2-420c-82af-38c36ee2bfd3" Dec 13 23:08:09.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:54024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:09.323138 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:54024.service - OpenSSH per-connection server daemon (10.0.0.1:54024). 
Dec 13 23:08:09.327719 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 23:08:09.327825 kernel: audit: type=1130 audit(1765667289.322:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:54024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:09.380994 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 54024 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:08:09.379000 audit[5084]: USER_ACCT pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.383000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.386523 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:08:09.387970 kernel: audit: type=1101 audit(1765667289.379:813): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.388031 kernel: audit: type=1103 audit(1765667289.383:814): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.390139 kernel: audit: type=1006 audit(1765667289.383:815): pid=5084 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 23:08:09.383000 audit[5084]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce5915d0 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:09.392927 systemd-logind[1585]: New session 16 of user core. Dec 13 23:08:09.393911 kernel: audit: type=1300 audit(1765667289.383:815): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce5915d0 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:09.393960 kernel: audit: type=1327 audit(1765667289.383:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:09.383000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:09.400328 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 23:08:09.401000 audit[5084]: USER_START pid=5084 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.403000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.410067 kernel: audit: type=1105 audit(1765667289.401:816): pid=5084 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.410147 kernel: audit: type=1103 audit(1765667289.403:817): pid=5088 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.504541 sshd[5088]: Connection closed by 10.0.0.1 port 54024 Dec 13 23:08:09.504728 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Dec 13 23:08:09.505000 audit[5084]: USER_END pid=5084 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.505000 audit[5084]: CRED_DISP pid=5084 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.514128 kernel: audit: type=1106 audit(1765667289.505:818): pid=5084 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.514196 kernel: audit: type=1104 audit(1765667289.505:819): pid=5084 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:09.514436 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:54024.service: Deactivated successfully. Dec 13 23:08:09.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:54024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:09.516175 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 23:08:09.516892 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Dec 13 23:08:09.518661 systemd-logind[1585]: Removed session 16. 
Dec 13 23:08:09.734766 kubelet[2771]: E1213 23:08:09.734469 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" podUID="a7cced14-c53b-44b0-9445-694ed7cd5577"
Dec 13 23:08:09.954404 kubelet[2771]: E1213 23:08:09.954365 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:08:12.736630 kubelet[2771]: E1213 23:08:12.736394 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48"
Dec 13 23:08:14.525094 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340).
Dec 13 23:08:14.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:42340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:14.529338 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 23:08:14.529453 kernel: audit: type=1130 audit(1765667294.524:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:42340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:14.599000 audit[5130]: USER_ACCT pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.601328 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:08:14.605137 kernel: audit: type=1101 audit(1765667294.599:822): pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.605222 kernel: audit: type=1103 audit(1765667294.603:823): pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.603000 audit[5130]: CRED_ACQ pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.605558 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:08:14.610026 kernel: audit: type=1006 audit(1765667294.603:824): pid=5130 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Dec 13 23:08:14.610116 kernel: audit: type=1300 audit(1765667294.603:824): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3711500 a2=3 a3=0 items=0 ppid=1 pid=5130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:14.603000 audit[5130]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3711500 a2=3 a3=0 items=0 ppid=1 pid=5130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:14.610662 systemd-logind[1585]: New session 17 of user core.
Dec 13 23:08:14.603000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:14.615890 kernel: audit: type=1327 audit(1765667294.603:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:14.621411 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 23:08:14.623000 audit[5130]: USER_START pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.624000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.631505 kernel: audit: type=1105 audit(1765667294.623:825): pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.631571 kernel: audit: type=1103 audit(1765667294.624:826): pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.825213 sshd[5134]: Connection closed by 10.0.0.1 port 42340
Dec 13 23:08:14.825420 sshd-session[5130]: pam_unix(sshd:session): session closed for user core
Dec 13 23:08:14.826000 audit[5130]: USER_END pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.831000 audit[5130]: CRED_DISP pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.837626 kernel: audit: type=1106 audit(1765667294.826:827): pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.837706 kernel: audit: type=1104 audit(1765667294.831:828): pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.838662 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:42340.service: Deactivated successfully.
Dec 13 23:08:14.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:42340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:14.841744 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 23:08:14.842592 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit.
Dec 13 23:08:14.845318 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:42348.service - OpenSSH per-connection server daemon (10.0.0.1:42348).
Dec 13 23:08:14.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:42348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:14.846809 systemd-logind[1585]: Removed session 17.
Dec 13 23:08:14.911000 audit[5147]: USER_ACCT pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.913159 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 42348 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:08:14.912000 audit[5147]: CRED_ACQ pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.913000 audit[5147]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9fd9d50 a2=3 a3=0 items=0 ppid=1 pid=5147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:14.913000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:14.914838 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:08:14.919160 systemd-logind[1585]: New session 18 of user core.
Dec 13 23:08:14.930351 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 23:08:14.932000 audit[5147]: USER_START pid=5147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:14.935000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.096217 sshd[5151]: Connection closed by 10.0.0.1 port 42348
Dec 13 23:08:15.097050 sshd-session[5147]: pam_unix(sshd:session): session closed for user core
Dec 13 23:08:15.097000 audit[5147]: USER_END pid=5147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.097000 audit[5147]: CRED_DISP pid=5147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.109664 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:42348.service: Deactivated successfully.
Dec 13 23:08:15.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:42348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:15.112486 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 23:08:15.113362 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit.
Dec 13 23:08:15.115947 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:42358.service - OpenSSH per-connection server daemon (10.0.0.1:42358).
Dec 13 23:08:15.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:42358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:15.117036 systemd-logind[1585]: Removed session 18.
Dec 13 23:08:15.180000 audit[5165]: USER_ACCT pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.181753 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 42358 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:08:15.181000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.181000 audit[5165]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd85fde30 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:15.181000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:15.184236 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:08:15.188456 systemd-logind[1585]: New session 19 of user core.
Dec 13 23:08:15.201312 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 23:08:15.203000 audit[5165]: USER_START pid=5165 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.204000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.735519 kubelet[2771]: E1213 23:08:15.735471 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb"
Dec 13 23:08:15.945000 audit[5183]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 23:08:15.945000 audit[5183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff9877020 a2=0 a3=1 items=0 ppid=2912 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:15.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 23:08:15.947814 sshd[5169]: Connection closed by 10.0.0.1 port 42358
Dec 13 23:08:15.949342 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Dec 13 23:08:15.950000 audit[5165]: USER_END pid=5165 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.950000 audit[5165]: CRED_DISP pid=5165 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:15.951000 audit[5183]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 23:08:15.951000 audit[5183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff9877020 a2=0 a3=1 items=0 ppid=2912 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:15.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 23:08:15.957949 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:42358.service: Deactivated successfully.
Dec 13 23:08:15.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:42358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:15.960696 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 23:08:15.961808 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit.
Dec 13 23:08:15.966735 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:42370.service - OpenSSH per-connection server daemon (10.0.0.1:42370).
Dec 13 23:08:15.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:42370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:15.969900 systemd-logind[1585]: Removed session 19.
Dec 13 23:08:15.979000 audit[5190]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 23:08:15.979000 audit[5190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd1470aa0 a2=0 a3=1 items=0 ppid=2912 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:15.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 23:08:15.983000 audit[5190]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 23:08:15.983000 audit[5190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd1470aa0 a2=0 a3=1 items=0 ppid=2912 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:15.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 23:08:16.046523 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 42370 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:08:16.045000 audit[5189]: USER_ACCT pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.048000 audit[5189]: CRED_ACQ pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.051928 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:08:16.050000 audit[5189]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3e86b70 a2=3 a3=0 items=0 ppid=1 pid=5189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:16.050000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:16.057896 systemd-logind[1585]: New session 20 of user core.
Dec 13 23:08:16.069686 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 23:08:16.071000 audit[5189]: USER_START pid=5189 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.073000 audit[5194]: CRED_ACQ pid=5194 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.381528 sshd[5194]: Connection closed by 10.0.0.1 port 42370
Dec 13 23:08:16.381551 sshd-session[5189]: pam_unix(sshd:session): session closed for user core
Dec 13 23:08:16.384000 audit[5189]: USER_END pid=5189 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.384000 audit[5189]: CRED_DISP pid=5189 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.393667 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:42370.service: Deactivated successfully.
Dec 13 23:08:16.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:42370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:16.397936 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 23:08:16.398938 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit.
Dec 13 23:08:16.405333 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:42374.service - OpenSSH per-connection server daemon (10.0.0.1:42374).
Dec 13 23:08:16.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:42374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:16.406343 systemd-logind[1585]: Removed session 20.
Dec 13 23:08:16.467000 audit[5208]: USER_ACCT pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.469119 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 42374 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:08:16.468000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.469000 audit[5208]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff19de300 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:16.469000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:16.470893 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:08:16.475213 systemd-logind[1585]: New session 21 of user core.
Dec 13 23:08:16.484351 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 23:08:16.486000 audit[5208]: USER_START pid=5208 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.487000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.580682 sshd[5212]: Connection closed by 10.0.0.1 port 42374
Dec 13 23:08:16.581285 sshd-session[5208]: pam_unix(sshd:session): session closed for user core
Dec 13 23:08:16.581000 audit[5208]: USER_END pid=5208 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.581000 audit[5208]: CRED_DISP pid=5208 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:16.585911 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:42374.service: Deactivated successfully.
Dec 13 23:08:16.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:42374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:16.589815 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 23:08:16.590666 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit.
Dec 13 23:08:16.591565 systemd-logind[1585]: Removed session 21.
Dec 13 23:08:18.734374 kubelet[2771]: E1213 23:08:18.734165 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fcgnt" podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c"
Dec 13 23:08:20.735485 containerd[1603]: time="2025-12-13T23:08:20.735448278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 13 23:08:20.956977 containerd[1603]: time="2025-12-13T23:08:20.956767146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 13 23:08:20.958260 containerd[1603]: time="2025-12-13T23:08:20.958128622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 13 23:08:20.958338 containerd[1603]: time="2025-12-13T23:08:20.958200978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 13 23:08:20.958573 kubelet[2771]: E1213 23:08:20.958510 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 13 23:08:20.958573 kubelet[2771]: E1213 23:08:20.958567 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 13 23:08:20.959207 kubelet[2771]: E1213 23:08:20.958691 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st5j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-567b4f6b5-7hz8c_calico-apiserver(485d924b-b146-4fe9-a032-61945d861754): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 13 23:08:20.960021 kubelet[2771]: E1213 23:08:20.959965 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754"
Dec 13 23:08:21.596491 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:47260.service - OpenSSH per-connection server daemon (10.0.0.1:47260).
Dec 13 23:08:21.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:47260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:21.597197 kernel: kauditd_printk_skb: 57 callbacks suppressed
Dec 13 23:08:21.597305 kernel: audit: type=1130 audit(1765667301.595:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:47260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:08:21.654000 audit[5231]: USER_ACCT pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:21.656206 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 47260 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw
Dec 13 23:08:21.658000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:21.660691 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:08:21.662323 kernel: audit: type=1101 audit(1765667301.654:871): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:21.662439 kernel: audit: type=1103 audit(1765667301.658:872): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 23:08:21.662462 kernel: audit: type=1006 audit(1765667301.658:873): pid=5231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 13 23:08:21.664023 kernel: audit: type=1300 audit(1765667301.658:873): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9d1ff90 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:21.658000 audit[5231]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9d1ff90 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:08:21.667090 systemd-logind[1585]: New session 22 of user core.
Dec 13 23:08:21.667342 kernel: audit: type=1327 audit(1765667301.658:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:21.658000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 23:08:21.678332 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 23:08:21.680000 audit[5231]: USER_START pid=5231 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.684000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.689120 kernel: audit: type=1105 audit(1765667301.680:874): pid=5231 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.689509 kernel: audit: type=1103 audit(1765667301.684:875): pid=5235 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.736432 containerd[1603]: time="2025-12-13T23:08:21.736152473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 23:08:21.809212 sshd[5235]: Connection closed by 10.0.0.1 port 47260 Dec 13 23:08:21.808560 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Dec 13 23:08:21.808000 audit[5231]: USER_END pid=5231 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 13 23:08:21.812766 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Dec 13 23:08:21.812888 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:47260.service: Deactivated successfully. Dec 13 23:08:21.809000 audit[5231]: CRED_DISP pid=5231 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.814582 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 23:08:21.816781 systemd-logind[1585]: Removed session 22. Dec 13 23:08:21.817405 kernel: audit: type=1106 audit(1765667301.808:876): pid=5231 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.817469 kernel: audit: type=1104 audit(1765667301.809:877): pid=5231 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:21.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:47260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:08:21.950171 containerd[1603]: time="2025-12-13T23:08:21.950038420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:21.951475 containerd[1603]: time="2025-12-13T23:08:21.951429660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 23:08:21.951537 containerd[1603]: time="2025-12-13T23:08:21.951463538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:21.951687 kubelet[2771]: E1213 23:08:21.951640 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 23:08:21.951742 kubelet[2771]: E1213 23:08:21.951698 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 23:08:21.951845 kubelet[2771]: E1213 23:08:21.951804 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1b4a3e8666244488bf626e9588026acb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgzxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c669cdcf6-md9gr_calico-system(22a73c93-9cd2-420c-82af-38c36ee2bfd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:21.954127 containerd[1603]: time="2025-12-13T23:08:21.953812921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 23:08:22.159118 containerd[1603]: 
time="2025-12-13T23:08:22.159035050Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:22.160312 containerd[1603]: time="2025-12-13T23:08:22.160257423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 23:08:22.160421 containerd[1603]: time="2025-12-13T23:08:22.160356738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:22.160641 kubelet[2771]: E1213 23:08:22.160544 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 23:08:22.160641 kubelet[2771]: E1213 23:08:22.160637 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 23:08:22.160911 kubelet[2771]: E1213 23:08:22.160759 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgzxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c669cdcf6-md9gr_calico-system(22a73c93-9cd2-420c-82af-38c36ee2bfd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:22.161978 kubelet[2771]: E1213 23:08:22.161933 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c669cdcf6-md9gr" podUID="22a73c93-9cd2-420c-82af-38c36ee2bfd3" Dec 13 23:08:23.345000 audit[5248]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:08:23.345000 audit[5248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffdb7d8e0 a2=0 a3=1 items=0 ppid=2912 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:23.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:08:23.353000 audit[5248]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 23:08:23.353000 audit[5248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffffdb7d8e0 a2=0 a3=1 items=0 ppid=2912 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:23.353000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 23:08:23.735469 containerd[1603]: time="2025-12-13T23:08:23.735333418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 23:08:23.947994 containerd[1603]: time="2025-12-13T23:08:23.947940040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:23.948933 containerd[1603]: time="2025-12-13T23:08:23.948883071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 23:08:23.949004 containerd[1603]: time="2025-12-13T23:08:23.948947788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:23.949307 kubelet[2771]: E1213 23:08:23.949263 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 23:08:23.949576 kubelet[2771]: E1213 23:08:23.949331 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 23:08:23.949576 kubelet[2771]: E1213 23:08:23.949500 2771 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8vrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69fcdcb775-d2n7t_calico-system(a7cced14-c53b-44b0-9445-694ed7cd5577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:23.951132 kubelet[2771]: E1213 23:08:23.950815 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69fcdcb775-d2n7t" podUID="a7cced14-c53b-44b0-9445-694ed7cd5577" Dec 13 23:08:25.734362 containerd[1603]: time="2025-12-13T23:08:25.734315793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 23:08:25.948591 containerd[1603]: time="2025-12-13T23:08:25.948546691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 
23:08:25.949565 containerd[1603]: time="2025-12-13T23:08:25.949528527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 23:08:25.949653 containerd[1603]: time="2025-12-13T23:08:25.949596604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:25.949795 kubelet[2771]: E1213 23:08:25.949756 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:08:25.950092 kubelet[2771]: E1213 23:08:25.949809 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 23:08:25.950092 kubelet[2771]: E1213 23:08:25.949933 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-567b4f6b5-cd8w4_calico-apiserver(e9f6be97-9073-49f4-b46f-97add2dc7d48): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:25.951152 kubelet[2771]: E1213 23:08:25.951080 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-cd8w4" podUID="e9f6be97-9073-49f4-b46f-97add2dc7d48" Dec 13 23:08:26.829239 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 23:08:26.829374 kernel: audit: type=1130 audit(1765667306.825:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:47274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:26.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:47274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:26.826411 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:47274.service - OpenSSH per-connection server daemon (10.0.0.1:47274). 
Dec 13 23:08:26.890000 audit[5250]: USER_ACCT pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.891951 sshd[5250]: Accepted publickey for core from 10.0.0.1 port 47274 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:08:26.894000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.896330 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:08:26.898455 kernel: audit: type=1101 audit(1765667306.890:882): pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.898531 kernel: audit: type=1103 audit(1765667306.894:883): pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.898554 kernel: audit: type=1006 audit(1765667306.894:884): pid=5250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 23:08:26.894000 audit[5250]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb7cdf70 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:26.903544 kernel: audit: type=1300 audit(1765667306.894:884): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb7cdf70 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:26.894000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:26.903788 systemd-logind[1585]: New session 23 of user core. Dec 13 23:08:26.905243 kernel: audit: type=1327 audit(1765667306.894:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:26.908324 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 23:08:26.909000 audit[5250]: USER_START pid=5250 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.914000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.918746 kernel: audit: type=1105 audit(1765667306.909:885): pid=5250 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.918808 kernel: audit: type=1103 audit(1765667306.914:886): pid=5254 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.991672 sshd[5254]: Connection closed by 10.0.0.1 port 47274 Dec 13 23:08:26.992219 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Dec 13 23:08:26.992000 audit[5250]: USER_END pid=5250 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.996875 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:47274.service: Deactivated successfully. Dec 13 23:08:26.992000 audit[5250]: CRED_DISP pid=5250 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.999319 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 23:08:27.000186 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 23:08:27.001030 kernel: audit: type=1106 audit(1765667306.992:887): pid=5250 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:27.001372 kernel: audit: type=1104 audit(1765667306.992:888): pid=5250 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:26.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:47274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:27.001666 systemd-logind[1585]: Removed session 23. 
Dec 13 23:08:27.733787 containerd[1603]: time="2025-12-13T23:08:27.733751745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 23:08:27.946964 containerd[1603]: time="2025-12-13T23:08:27.946911398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:27.947851 containerd[1603]: time="2025-12-13T23:08:27.947821003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 23:08:27.947958 containerd[1603]: time="2025-12-13T23:08:27.947895720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:27.948076 kubelet[2771]: E1213 23:08:27.948039 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 23:08:27.948933 kubelet[2771]: E1213 23:08:27.948086 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 23:08:27.948933 kubelet[2771]: E1213 23:08:27.948227 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 13 23:08:27.950525 containerd[1603]: time="2025-12-13T23:08:27.950497180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 23:08:28.166030 containerd[1603]: time="2025-12-13T23:08:28.165936747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:28.166995 containerd[1603]: time="2025-12-13T23:08:28.166949111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 23:08:28.167068 containerd[1603]: time="2025-12-13T23:08:28.167015069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:28.167201 kubelet[2771]: E1213 23:08:28.167167 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 23:08:28.167245 kubelet[2771]: E1213 23:08:28.167214 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 23:08:28.167377 kubelet[2771]: E1213 23:08:28.167342 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8xf56_calico-system(68e4691c-7de1-4668-91bc-eef5c31432bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:28.168489 kubelet[2771]: E1213 23:08:28.168447 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8xf56" podUID="68e4691c-7de1-4668-91bc-eef5c31432bb" Dec 13 23:08:32.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:39614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:32.004926 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:39614.service - OpenSSH per-connection server daemon (10.0.0.1:39614). Dec 13 23:08:32.009136 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 23:08:32.009204 kernel: audit: type=1130 audit(1765667312.004:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:39614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:08:32.056000 audit[5270]: USER_ACCT pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.056408 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 39614 ssh2: RSA SHA256:TlRF5BBjRhguf3xLXDGh2CyW2nNdLq96WVT41Xx9kNw Dec 13 23:08:32.060000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.061561 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:08:32.064034 kernel: audit: type=1101 audit(1765667312.056:891): pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.065152 kernel: audit: type=1103 audit(1765667312.060:892): pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.065200 kernel: audit: type=1006 audit(1765667312.060:893): pid=5270 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 23:08:32.060000 audit[5270]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd67abdf0 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:32.069644 kernel: audit: type=1300 audit(1765667312.060:893): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd67abdf0 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:08:32.060000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:32.071223 kernel: audit: type=1327 audit(1765667312.060:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 23:08:32.074189 systemd-logind[1585]: New session 24 of user core. Dec 13 23:08:32.086151 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 23:08:32.088000 audit[5270]: USER_START pid=5270 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.093143 kernel: audit: type=1105 audit(1765667312.088:894): pid=5270 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.093000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.097144 kernel: audit: type=1103 audit(1765667312.093:895): pid=5274 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.168192 sshd[5274]: Connection closed by 10.0.0.1 port 39614 Dec 13 23:08:32.168462 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Dec 13 23:08:32.169000 audit[5270]: USER_END pid=5270 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.172451 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Dec 13 23:08:32.172671 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:39614.service: Deactivated successfully. Dec 13 23:08:32.169000 audit[5270]: CRED_DISP pid=5270 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.177140 kernel: audit: type=1106 audit(1765667312.169:896): pid=5270 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.177199 kernel: audit: type=1104 audit(1765667312.169:897): pid=5270 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 23:08:32.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:39614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:08:32.177316 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 23:08:32.179012 systemd-logind[1585]: Removed session 24. Dec 13 23:08:33.734148 kubelet[2771]: E1213 23:08:33.734054 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567b4f6b5-7hz8c" podUID="485d924b-b146-4fe9-a032-61945d861754" Dec 13 23:08:33.735921 kubelet[2771]: E1213 23:08:33.734642 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c669cdcf6-md9gr" podUID="22a73c93-9cd2-420c-82af-38c36ee2bfd3" Dec 13 23:08:33.737380 containerd[1603]: time="2025-12-13T23:08:33.737339965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 23:08:33.929519 containerd[1603]: 
time="2025-12-13T23:08:33.929471702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 23:08:33.930505 containerd[1603]: time="2025-12-13T23:08:33.930474760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 23:08:33.930585 containerd[1603]: time="2025-12-13T23:08:33.930553478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 23:08:33.931033 kubelet[2771]: E1213 23:08:33.930765 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 23:08:33.931033 kubelet[2771]: E1213 23:08:33.930821 2771 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 23:08:33.931033 kubelet[2771]: E1213 23:08:33.930965 2771 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm69d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fcgnt_calico-system(749ee60f-8b5b-4a24-9e66-ab82e119fd2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 23:08:33.932114 kubelet[2771]: E1213 23:08:33.932076 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fcgnt" podUID="749ee60f-8b5b-4a24-9e66-ab82e119fd2c"